128 MB Workers memory limit when using R2

I'm trying to understand the 128 MB memory limit for Workers and what it means for my use case. The documentation says: "Only one Workers instance runs on each of the many global Cloudflare network edge servers. Each Workers instance can consume up to 128 MB of memory." Is there any way to monitor how much memory is actually used while my Worker is handling a request?

Also, I have a very simple use case: a Worker that supports GET and PUT to read files directly out of and put files directly into an R2 bucket. Those files might be big. Is any significant memory allocated in the Worker for those files during that process? For example, if 1000 concurrent requests try to download a file that is 100 MB, could that cause issues? Would I be likely to run into any limits?
6 Replies
kian (2y ago)
Downloading the files, assuming you just pass the object body as the response as-is, doesn't buffer the body, so it's a non-issue with regard to memory
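For concreteness, a minimal sketch of that streaming GET pattern, assuming an R2 binding named MY_BUCKET, types from @cloudflare/workers-types, and that the object key is taken from the URL path (all assumptions, not from the thread):

```ts
// Minimal sketch of a streaming GET from R2, assuming a binding named MY_BUCKET
// configured in wrangler.toml and types from @cloudflare/workers-types.
export default {
  async fetch(request: Request, env: { MY_BUCKET: R2Bucket }): Promise<Response> {
    const key = new URL(request.url).pathname.slice(1); // assumed key scheme
    const object = await env.MY_BUCKET.get(key);
    if (object === null) {
      return new Response("Not found", { status: 404 });
    }
    const headers = new Headers();
    object.writeHttpMetadata(headers); // copy stored Content-Type etc.
    headers.set("etag", object.httpEtag);
    // object.body is a ReadableStream; returning it as-is streams the file
    // to the client without the Worker buffering it in memory.
    return new Response(object.body, { headers });
  },
};
```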
denchi (OP, 2y ago)
Perfect, that's what I was hoping.
kian (2y ago)
Ditto with PUT - as long as you’re sending the binary data as the request body rather than using FormData, it won’t buffer
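A rough sketch of that PUT side under the same assumptions (MY_BUCKET binding, key from the path, raw bytes in the request body rather than multipart FormData):

```ts
// Minimal sketch of a streaming PUT into R2, same assumed MY_BUCKET binding
// and key scheme as the GET sketch above.
export default {
  async fetch(request: Request, env: { MY_BUCKET: R2Bucket }): Promise<Response> {
    if (request.method !== "PUT") {
      return new Response("Method not allowed", { status: 405 });
    }
    const key = new URL(request.url).pathname.slice(1);
    // request.body is a ReadableStream; R2 consumes it incrementally, so the
    // Worker never holds the whole upload in memory. Passing request.headers
    // as httpMetadata stores Content-Type and friends alongside the object.
    const object = await env.MY_BUCKET.put(key, request.body, {
      httpMetadata: request.headers,
    });
    return new Response(null, { status: 201, headers: { etag: object.httpEtag } });
  },
};
```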
denchi (OP, 2y ago)
Got it. Say I wanted to check the first few bytes of a file before putting it into R2: would that be possible without allocating memory for the whole file?
kian (2y ago)
Not that I’m aware of - you can only read a body once, and if you tee/clone it so that you have two, they must be read in parallel or one of them will be buffered into memory
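To make the trade-off concrete, here is a hedged sketch of what the buffered route would look like: reading the whole body with arrayBuffer() so the first bytes can be inspected (a hypothetical "%PDF" magic-number check, not something from the thread), at the cost of holding the entire file in Worker memory, which is exactly what pushes a 100 MB upload toward the 128 MB limit.

```ts
// Hedged sketch of the buffered approach, same assumed MY_BUCKET binding.
// arrayBuffer() pulls the entire upload into memory before the check, so a
// 100 MB file costs roughly 100 MB against the 128 MB Worker limit.
export default {
  async fetch(request: Request, env: { MY_BUCKET: R2Bucket }): Promise<Response> {
    const key = new URL(request.url).pathname.slice(1);
    const data = await request.arrayBuffer(); // whole body buffered here
    // Hypothetical check: only accept files that start with the PDF magic bytes.
    const magic = new TextDecoder().decode(new Uint8Array(data.slice(0, 4)));
    if (magic !== "%PDF") {
      return new Response("Unsupported file type", { status: 415 });
    }
    await env.MY_BUCKET.put(key, data);
    return new Response(null, { status: 201 });
  },
};
```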
denchi (OP, 2y ago)
Okay. So if I wanted to do that, I would likely run into the memory limit. Thanks for clearing that up 🙏