(Relatively) high latency compared to other services
I noticed that Cloudflare Pages latency is (relatively) quite a bit higher than Vercel's and Netlify's.
I sent 100 requests to each of these providers and measured the latency to download a txt file containing just "Hello!", only counting requests that used an already established connection to the server:
Vercel:
Netlify:
Cloudflare Pages:
This is from a VPS from Hetzner in Germany. I get similar results from my home connection in the Netherlands. How come Cloudflare Pages has the highest latency, given the amount of PoPs Cloudflare has?
Don't get me wrong, I absolutely love Cloudflare Pages, just wondering if there's something unusual going on here.
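(For anyone wanting to reproduce the measurement: a minimal sketch of the methodology described above, assuming Node 18+ and a hypothetical target URL; the author's actual tool comes up later in the thread.)

```ts
// latency.ts: rough sketch, not the author's tool. Node 18+'s built-in fetch keeps
// connections alive, so after the warm-up the timed requests reuse the established TLS connection.
const url = "https://example.pages.dev/hello.txt"; // hypothetical target

async function main() {
  await fetch(url).then((r) => r.text()); // warm-up: sets up TCP + TLS, not counted

  const samples: number[] = [];
  for (let i = 0; i < 100; i++) {
    const start = performance.now();
    const res = await fetch(url);
    await res.text(); // read the full body so the timing includes the download
    samples.push(performance.now() - start);
  }

  samples.sort((a, b) => a - b);
  const avg = samples.reduce((a, b) => a + b, 0) / samples.length;
  console.log(
    `min=${samples[0].toFixed(1)}ms avg=${avg.toFixed(1)}ms max=${samples[samples.length - 1].toFixed(1)}ms`
  );
}

main();
```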
As far as I understand, it's because you query the database, so it is better to have the server close to the database (like Vercel and Netlify, which use AWS Lambda serverless functions under the hood) than close to the user (like Cloudflare Workers, for example). You can watch a Fireship video explaining this (from 2:32): https://www.youtube.com/watch?v=yOP5-3_WFus
I am not querying a database, just serving a static file
Not using a serverless/edge function
Still wondering about this.
"This is from a VPS from Hetzner in Germany. I get similar results from my home connection in the Netherlands. How come Cloudflare Pages has the highest latency, given the amount of PoPs Cloudflare has?"
Pages uses KV for static assets. KV only has two central stores, one in the EU and one in the US, and the rest of the locations just cache the value. More requests, more cache hits, less latency. I'd also check which location you are hitting: you can go to the /cdn-cgi/trace path and look at colo= to see the airport code of the location you are hitting. https://developers.cloudflare.com/ for example is a Pages static site with high traffic (and Enterprise routing).
By "the rest of the locations just cache the value", do you mean the regular Cloudflare CDN cache, or something KV-specific? Asking because I didn't notice the latency going down after a bunch of requests. Makes sense if it's the former, because I was sending requests to the pages.dev domain, which doesn't cache(?).
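(For reference, the colo check described above can be scripted; a minimal sketch, with a hypothetical pages.dev hostname.)

```ts
// trace.ts: sketch that fetches /cdn-cgi/trace and reports which colo (PoP) you are hitting
async function main() {
  const res = await fetch("https://example.pages.dev/cdn-cgi/trace"); // hypothetical host
  const body = await res.text();
  // The trace endpoint returns plain key=value lines; colo= is the airport code of the PoP
  const colo = body.split("\n").find((line) => line.startsWith("colo="));
  console.log(colo); // e.g. "colo=AMS"
}

main();
```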
I already checked the colo and it was fine, my bad for not mentioning it.
Seems like you meant the latter, judging by this page: https://developers.cloudflare.com/kv/reference/how-kv-works/
It's pretty close to the normal CDN cache, being colo-specific, but not exactly it. It's on both pages.dev and custom domains; ignore cf-cache-status.
I would be curious what your latency is to something completely on edge like https://cloudflare.com/cdn-cgi/trace
~50ms is pretty high for cached results
Also, what do you define as "fine"?
"I already checked the colo and it was fine, my bad for not mentioning it."
If you're using Hetzner Falkenstein/Germany, it should be FRA
Within the same country, I believe it was indeed FRA
AMS from my home connection, ~5 ms ping
HTTP:
ICMP:
Pretty close
But then, to a Pages file:
Do you still hit a close PoP if you ping hello-7v1.pages.dev or go to https://hello-7v1.pages.dev/cdn-cgi/trace?
Just checked, still AMS (from the Netherlands, so that's good)
Do you get anything better if trying against https://developers.cloudflare.com?
(robots.txt because it's a very small file like my hello.txt)
(Seems like it returns CF-Cache-Status: MISS every time)
I was picking the index because I doubt robots.txt is very heavily visited/cached
ignore that, that's just cdn cache, pages/kv has its own internal
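(For reference, the CF-Cache-Status check mentioned above is just a response header read; a tiny sketch, using the robots.txt URL from the thread.)

```ts
// cache-status.ts: sketch that checks whether a response was served from the Cloudflare CDN cache
async function main() {
  const res = await fetch("https://developers.cloudflare.com/robots.txt");
  // HIT means served from the CDN cache; MISS or DYNAMIC means it was not
  console.log(res.headers.get("cf-cache-status"));
}

main();
```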
Same idea, just a higher dl because the file is bigger:
Here's a file that does hit the CF cache (CF-Cache-Status: HIT) for comparison:
hmm yea, it's semi-known that the CF cache is faster than the Pages internal cache, but that's a pretty decent jump
Pages does have to go through CF for SaaS -> invoke worker -> KV cache (which is slower on its own)
My thoughts exactly. worker processes are reused across requests when possible though, right?
ehh not really and I doubt workers themselves are the issue
And all the requests except the first are using an already established TLS connection, so it's probably safe to rule out CF for SaaS
Well if you have keepalive on, you'd keep hitting the same isolate
Yep
Workers share the same process for multiple isolates/customers though, the isolation is isolate-level, not process-level (except in some cases, docs do a good job of explaining it: https://developers.cloudflare.com/workers/reference/security-model/)
if you wanted to test against a raw worker: https://chaika.me
Does / invoke a worker every time?
Yes, you can't have cache in front of Workers even if you wanted to
Makes sense. I wasn't sure about the terminology, I'll give that a read, thanks
Alright let me try
Also, what tool is that? Not one I know of
so just a bit better than cache on average probably
I couldn't find one with all the features I wanted, so I made my own: https://github.com/GitRowin/httping
Yep
Probably Workers KV then?
Do you happen to have an endpoint using it?
I think it's more of a combined slowness thing
Sure: https://chaika-kv2.chaika.me
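(For context, a KV-backed Worker endpoint like that is usually only a few lines; a minimal sketch in module syntax, not the actual endpoint's code, with a hypothetical binding name and key.)

```ts
// worker.ts: minimal sketch of a KV-backed Worker. The MY_KV binding and the "hello" key are
// hypothetical; the KVNamespace type comes from @cloudflare/workers-types.
interface Env {
  MY_KV: KVNamespace;
}

export default {
  async fetch(_request: Request, env: Env): Promise<Response> {
    // Every request invokes the Worker; the get() is answered from the colo's KV cache when hot,
    // otherwise it goes back to a central store
    const value = await env.MY_KV.get("hello");
    return new Response(value ?? "not found");
  },
};
```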
Pretty similar to the Pages numbers
still less than the robots.txt on dev.cf.com
I suppose
cdn-cgi trace being ~10ms, pages being ~38ms, cached being ~27ms and workers being ~25ms maybe isn't too bad though
Definitely