Your Worker failed validation because it exceeded startup limits. Global Scope.
I love developing with Workers, but I keep getting errors about the startup limits (CPU time). They seem quite random and hard to debug; pushing multiple times sometimes works.
Coming from AWS Lambda, the global scope is not a bad place to cache certain data. How should we do that in Cloudflare Workers, e.g. reusing database connections?
Let's take this code. It won't deploy, even though I am not doing anything in the global scope at cold start:
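(The original snippet didn't survive in this archive; below is a hypothetical reconstruction of the shape being described: heavy top-level imports with an otherwise empty global scope. The library names come from the reply below, and the handler body is invented purely for illustration.)

```ts
// Hypothetical reconstruction, not the poster's actual code.
// Nothing "runs" in the global scope here, but evaluating these
// top-level imports still happens at startup and can exceed the
// CPU startup limit on its own.
import GPT3Tokenizer from "gpt3-tokenizer";
import { Configuration, OpenAIApi } from "openai";

export default {
  async fetch(request: Request, env: { OPENAI_API_KEY: string }): Promise<Response> {
    const tokenizer = new GPT3Tokenizer({ type: "gpt3" });
    const openai = new OpenAIApi(
      new Configuration({ apiKey: env.OPENAI_API_KEY })
    );
    const { text } = (await request.json()) as { text: string };
    return Response.json({ tokens: tokenizer.encode(text).bpe.length });
  },
};
```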
Debugging CPU startup time issues in Workers leaves a lot to be desired, unfortunately; profiling is very tough. Almost certainly one of the libraries you're importing (gpt3-tokenizer, openai, etc.) is doing something in the global scope on startup that's exceeding the ~400ms CPU startup time you get.
You could try to load these async via await import() if possible, depending on your use case.
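(A minimal sketch of that idea, assuming a module-syntax Worker and using gpt3-tokenizer as the stand-in from above. Whether the import cost is actually deferred depends on the bundler; wrangler/esbuild may still inline the dynamic import unless code-splitting is enabled.)

```ts
// Sketch: defer a heavy dependency to the first request instead of
// importing it at module top level, so module startup stays cheap.
let tokenizerPromise: Promise<any> | null = null;

export default {
  async fetch(request: Request): Promise<Response> {
    // The first request in an isolate pays the import cost; later
    // requests reuse the cached promise. (Depending on CJS interop,
    // the class may be on mod.default or mod.default.default.)
    tokenizerPromise ??= import("gpt3-tokenizer").then(
      (mod) => new mod.default({ type: "gpt3" })
    );
    const tokenizer = await tokenizerPromise;
    const { text } = (await request.json()) as { text: string };
    return Response.json({ tokens: tokenizer.encode(text).bpe.length });
  },
};
```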
As for reusing database connections, the recommended approach right now, I believe, is to use a Durable Object: https://developers.cloudflare.com/workers/learning/using-durable-objects/
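(A minimal sketch of that Durable Object approach, assuming types from @cloudflare/workers-types. The database driver here is a placeholder: connect, DbClient, and DB_URL are invented names, not a real API.)

```ts
// Placeholder driver types for illustration only.
interface DbClient {
  query(sql: string): Promise<unknown>;
}
declare function connect(url: string): Promise<DbClient>;
interface Env {
  DB_URL: string;
}

// Requests routed to this object (via its namespace binding in the
// Worker) land on the same instance, so the connection opened here
// is reused across them instead of being re-established per request.
export class DbConnectionDO {
  private client: DbClient | null = null;

  constructor(private state: DurableObjectState, private env: Env) {}

  async fetch(_request: Request): Promise<Response> {
    // Lazily open the connection once per object instance.
    if (this.client === null) {
      this.client = await connect(this.env.DB_URL);
    }
    const rows = await this.client.query("SELECT 1");
    return Response.json(rows);
  }
}
```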
Thank you for your quick answer, James 🙂 I will try to debug it further. The DX just suffers a bit when you have to pray for each function to ship after creating it for the more limited Workers API.
For caching: is it okay to dynamically populate the global objects, i.e. initialize them with null and set them on demand?
If you're confident there's no chance (or at least a good enough chance) of anything leaking between requests when caching in global scope, then yeah, that should be fine.
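(Concretely, the "initialize with null, set on demand" pattern might look like this. The config URL and shape are placeholders; the cached value survives across requests served by the same isolate but is not shared globally and can be evicted at any time.)

```ts
// Global-scope cache: starts empty, populated by the first request.
let cachedConfig: Record<string, string> | null = null;

async function loadConfig(): Promise<Record<string, string>> {
  // Only the first request in a given isolate pays the fetch cost;
  // subsequent requests hit the in-memory value.
  if (cachedConfig === null) {
    const res = await fetch("https://example.com/config.json");
    cachedConfig = (await res.json()) as Record<string, string>;
  }
  return cachedConfig;
}

export default {
  async fetch(): Promise<Response> {
    const config = await loadConfig();
    return Response.json(config);
  },
};
```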