Is there any way I can find out how much time my Worker takes to start up?
Currently I have a fairly large JS bundle in a Worker that renders my pages from the edge. I would like to know how much the size of this JS is slowing down every request due to parse/compile time.
7 Replies
Afaik, if it's under the 1 MiB limit, it would throw errors if startup took too long
Not sure this limit applies to Enterprise plans. My compressed JS is around 3.5 MB.
Is there anywhere I can see the startup time limit?
All those limits are here:
https://developers.cloudflare.com/workers/platform/limits/
A Worker must be parsed and execute its global scope (top-level code outside of any handlers) within 400 ms. Worker size can impact startup time because there is more code to parse and evaluate. Avoiding expensive code in the global scope also helps keep startup efficient.
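To illustrate the "avoid expensive global-scope code" point, here's a minimal sketch of deferring heavy setup into the first request instead of paying for it at startup. `buildRenderer` and its module path are hypothetical stand-ins for whatever your bundle initializes:
```js
// Sketch: keep the global scope cheap so parse + top-level evaluation stays
// well under the 400 ms startup limit. `buildRenderer` is a hypothetical
// stand-in for expensive initialization your bundle might do.
import { buildRenderer } from "./renderer.js"; // hypothetical module

let renderer; // initialized lazily, not during startup

export default {
  async fetch(request, env, ctx) {
    // The first request pays the initialization cost; startup does not.
    renderer ??= buildRenderer();
    return new Response(renderer.render(request.url), {
      headers: { "content-type": "text/html" },
    });
  },
};
```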
> Not sure this limit applies to Enterprise plans. My compressed JS is around 3.5 MB.
Workers Paid plans get Workers up to 10 MB, but any Worker larger than 1 MB becomes slower, with longer cold start times.
Thanks @Chaika. Now it would be great if I could find out exactly how much time my Worker is losing for being "fat".
One reason I need this info is that I intend to optimize it, but it's hard to do that, or even to prioritize it, without having real numbers.
I know it's less than 400 ms now, otherwise it would be throwing an error.
Hmm, yeah, I'm not sure of the best way to measure that. I think you can access some performance information via Miniflare local dev, especially now that it's based on the same runtime (workerd) they use in production, but I'm not sure how close it would be. I can say, though, that if the Worker is over 1 MB it gets put in slower cold storage, which is then cached after use, and I don't imagine that Worker code fetch counts toward the startup time. Although getting it under 1 MB may not be feasible.
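One rough way to ballpark the parse/eval cost locally is plain Node. This is a sketch assuming a service-worker-format bundle at `./dist/worker.js` (hypothetical path); it won't match workerd's production numbers, but it gives a relative figure to optimize against:
```js
// Rough local proxy for startup cost: time V8 parse/compile and global-scope
// evaluation of the built bundle. Not the real workerd number, just a ballpark.
const { readFileSync } = require("node:fs");
const vm = require("node:vm");
const { performance } = require("node:perf_hooks");

const source = readFileSync("./dist/worker.js", "utf8"); // hypothetical bundle path

const t0 = performance.now();
const script = new vm.Script(source, { filename: "worker.js" }); // parse/compile
const t1 = performance.now();

// Stub just enough globals for a service-worker-format bundle's top level.
script.runInNewContext({ addEventListener: () => {}, console });
const t2 = performance.now();

console.log(`compile: ${(t1 - t0).toFixed(1)} ms`);
console.log(`global-scope eval: ${(t2 - t1).toFixed(1)} ms`);
```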
1 MB compressed, right? I think this might be a good target.
> I can say, though, that if the Worker is over 1 MB it gets put in slower cold storage, which is then cached after use
@Chaika, just to be sure: where did you get this info? Is it inside info?
Wrangler warns about this exact thing:
> ▲ [WARNING] We recommend keeping your script less than 1MiB (1024 KiB) after gzip. Exceeding past this can affect cold start time
If you wanted employee confirmation: https://discord.com/channels/595317990191398933/812577823599755274/1127168079252693022
More context: https://discord.com/channels/595317990191398933/779390076219686943/1095801998907015198
And yes, compressed size is what matters.
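Since compressed size is what counts, checking against that 1 MiB guideline is easy to script. A minimal sketch, assuming the built bundle sits at `./dist/worker.js` (hypothetical path):
```js
// Check the gzipped size of the built bundle against the 1 MiB (1024 KiB)
// guideline from the Wrangler warning. The bundle path is a hypothetical example.
const { readFileSync } = require("node:fs");
const { gzipSync } = require("node:zlib");

const raw = readFileSync("./dist/worker.js");
const gz = gzipSync(raw);

console.log(`raw: ${(raw.length / 1024).toFixed(0)} KiB, gzip: ${(gz.length / 1024).toFixed(0)} KiB`);
console.log(gz.length <= 1024 * 1024 ? "within the 1 MiB guideline" : "over the 1 MiB guideline");
```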