I have a question about how memory is managed for DOs:
« If your account creates many instances of a single Durable Object class, Durable Objects may run in the same isolate on the same physical machine and share the 128 MB of memory. »
Say we have 2 DO instances of the same class running in the same colo and sharing the same 128 MB of memory. Imagine one currently uses 100 MB and the other 27 MB, then the second increases its memory usage and we hit the 128 MB limit of the shared memory.
What happens then? Is the second instance killed and automatically recreated, in exactly the same state, on another 128 MB VM in the colo?
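For context, my mental model is that an evicted instance loses its in-memory fields but can resume from persisted storage. A toy sketch in plain TypeScript (this is NOT the Workers runtime; `durableStorage` and `Counter` are made-up stand-ins for `ctx.storage` and a DO class):

```typescript
// Stand-in for durable storage, which outlives any single instance.
const durableStorage = new Map<string, number>();

class Counter {
  private cached: number | undefined; // in-memory state: lost on eviction

  increment(): number {
    // Lazily reload from storage after a (re)start.
    if (this.cached === undefined) {
      this.cached = durableStorage.get("count") ?? 0;
    }
    this.cached += 1;
    durableStorage.set("count", this.cached); // persisted write survives eviction
    return this.cached;
  }
}

// First "instance" handles two requests...
let counter = new Counter();
counter.increment();
counter.increment();

// ...then is evicted (e.g. memory pressure): the in-memory cache is gone...
counter = new Counter();

// ...but the replacement instance resumes from storage, not from zero.
const n = counter.increment();
console.log(n); // 3
```

So my question is really whether the runtime does this transparently, and whether anything NOT written to storage is simply lost.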
15 Replies
Isn't this a horrible restriction? I would never have known about it were it not for this thread, and I see it buried under pricing rather than mentioned under learning (https://developers.cloudflare.com/workers/learning/using-durable-objects/), APIs (https://developers.cloudflare.com/workers/runtime-apis/durable-objects/), or limits (https://developers.cloudflare.com/workers/platform/limits/#durable-objects). In fact, under limits, one gets the impression that in principle there is NO limitation on DOs:
"Durable Objects have been built such that the number of Objects in the system do not need to be limited. You can create and run as many separate objects as you want. The main limit to your usage of Durable Objects is the total storage limit per account - if you need more storage, contact your account team."
The likelihood of any of your Durable Objects sharing memory is insanely low, because there are so many servers that can run them; you will probably never run into that issue.
Not to mention, multiple DOs would need to be active at the same time for that limitation to even occur, so it's an even less likely event
Unknown User•2y ago: (message not public)
Well yeah, I guess if you have more DOs active than machines, but I assume most people won't hit that
Unknown User•2y ago: (message not public)
"Not to mention there would need to be multiple DOs active at the same time"
There certainly will be. (Whether that would be more than the number of machines, I don't know; it depends on how many machines there are, etc.) But in general, a great many DOs will certainly be active. So is this a problem or not? Because the docs are saying one thing, while the reality is that DOs are not really very scalable today, and the real limitation is not storage but memory.
Personally, I've never had any issues with DOs at high scale... 35 million monthly requests to DOs
Thank you @unsmart for this info, that's good to hear. Would you also be able to say roughly how many DOs of the same class are running?
Uhh like 300ish
Also fwiw R2 uses DOs as well... not sure if they did anything special to remove the shared memory limit though
Unknown User•2y ago: (message not public)
So, from what you have quoted above, does that mean the issue of sharing memory (and hence being reset/killed etc) occurs only if one were to use globals?
Unknown User•2y ago: (message not public)
Anything that allocates and retains memory (cannot be GC'ed) will increase the size of your isolate. You really cannot plan on grabbing and using all of the advertised memory described in the public docs, since other instances can and will be scheduled in the same isolate. Really wish there was a guaranteed ceiling option for DOs, or a larger memory limit option.
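To make that concrete, here's a minimal sketch in plain TypeScript (again, not the actual Workers runtime; `MyDurableObject` and `sharedCache` are made-up names) of why module-level state counts against the whole isolate while instance fields can be reclaimed when an object is evicted:

```typescript
// Module scope: shared by EVERY DO instance co-located in this isolate,
// and retained as long as the isolate lives.
const sharedCache: Map<string, Uint8Array> = new Map();

class MyDurableObject {
  // Instance scope: reclaimable once this instance is evicted and GC'ed.
  private buffer: Uint8Array = new Uint8Array(0);

  handle(key: string, size: number): void {
    this.buffer = new Uint8Array(size);          // per-instance allocation
    sharedCache.set(key, new Uint8Array(size));  // retained isolate-wide
  }
}

const a = new MyDurableObject();
const b = new MyDurableObject();
a.handle("a", 1024);
b.handle("b", 2048);

// Both instances' writes landed in the same module-level map:
console.log(sharedCache.size); // 2
```

Dropping all references to `a` or `b` lets their `buffer` fields be collected, but nothing ever frees `sharedCache` short of the isolate going away, which is why globals are the main way to blow through the shared budget.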
If I deploy the same worker + DO code (mjs) under different names and paths, will the DO be shared, and thus fall into the definition of "many instances of a single Durable Object class"?
Or is (worker x DO class) unique?