Error: Promise will never complete

I have a worker that processes image frames posted to its URL and stores them in R2. This works fine with single image frames, but when I post more than 50 frames, the worker dies after the 50th iteration of the processing loop and throws "Error: Promise will never complete". This worker is on the free bundled usage model, so I know subrequests are limited to 50, but I don't think I'm making any subrequests except to R2, which per the docs should allow 1,000 requests even on the bundled usage model?
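For context, the loop in question presumably looks something like the sketch below (hypothetical names: `MY_BUCKET` is an assumed R2 binding, `storeFrames` is an assumed helper; the in-memory `env` stand-in is only there so the sketch runs outside Workers). As the replies note, calls made through an R2 binding are not `fetch()` subrequests, so they are not subject to the 50-subrequest cap.

```javascript
// Sketch of a per-frame R2 upload loop, assuming frames arrive as byte arrays.
// env.MY_BUCKET.put() is a binding call, not a fetch() subrequest.
async function storeFrames(env, frames) {
  for (let i = 0; i < frames.length; i++) {
    await env.MY_BUCKET.put(`frames/frame-${i}.bin`, frames[i]);
  }
  return frames.length;
}

// Minimal in-memory stand-in for the R2 binding so the sketch is runnable:
const store = new Map();
const env = {
  MY_BUCKET: { put: async (key, value) => store.set(key, value) },
};

storeFrames(env, [new Uint8Array(4), new Uint8Array(4), new Uint8Array(4)])
  .then((n) => console.log(`stored ${n} frames`));
```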
11 Replies
zegevlier · 2y ago
Are you making the requests using the binding or using the R2 URLs? Only the binding has the higher limit.
Jason Hostetter
Using the binding. When I check the dashboard under Requests -> Subrequests, it says "No data available", i.e. none made. I'm not sure if it's just not recording subrequests because the process is dying, but I would think that if R2 binding calls were included, it would log them for the invocations that succeed with <50 frames...
kian · 2y ago
I don't think R2/KV/etc. appear in the subrequest graph. Are you using WASM for this Worker? WASM and hanging-Promise errors go together pretty often.
Jason Hostetter
Yes, I am. Is there a reason it would die after exactly 50 invocations?
kian · 2y ago
I doubt it's something specific to the number 50; more likely it's some other issue that just so happens to become a problem at 50 frames. Unless you're doing fetch or cache requests - they're the only things limited to 50.
Jason Hostetter
Only caching final responses, not each frame. I'll have to find another multiframe image to test. You said these errors are common with WASM - is that because of how people load and interface with the WASM modules, or some issue with how the platform handles them? Also, if it were a memory issue related to the WASM module, would that show up in the dashboard as an "Exceeded Memory" error? It says I have zero of those, FWIW, so hopefully that's not the cause.
kian · 2y ago
Is the WASM being instantiated outside of the fetch handler or anything like that?
You said these errors are common with WASM - is this because of how people are loading/interfacing with the wasm modules, or some issue with how the platform is handling it?
No idea - that'd be one for the runtime team. I've just seen it pretty often with things like workers-rs or esbuild-wasm.
Jason Hostetter
Yeah, it's being instantiated in an imported module.
kian · 2y ago
A common issue is that one request is awaiting a Promise that belongs to another request - the runtime sees that the waiting request has no more pending I/O of its own, so it concludes that the request will never generate a response.
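The pattern kian describes can be sketched like this (hypothetical names: `pendingFrame` and the two request handlers are illustrations, not the original code). In plain Node this runs fine, which is the trap: workerd ties every Promise to the request that created it, so a request awaiting a Promise that only a *different* request can resolve appears to have no pending I/O and is rejected with "Promise will never complete".

```javascript
// Anti-pattern: a Promise shared across requests via module-level state.
let resolvePending;
const pendingFrame = new Promise((resolve) => { resolvePending = resolve; });

// Request A eventually resolves the shared Promise:
async function handleRequestA() {
  resolvePending("frame processed");
}

// Request B awaits a Promise it cannot resolve itself. In workerd, once
// request A's I/O context is gone, this await is judged unfulfillable.
async function handleRequestB() {
  return await pendingFrame;
}
```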
kian · 2y ago
GitHub: workerd/io-context.h at 5ad958127b05c968f62c018be15ad465d73e0a3a · cloudflare/workerd - "The JavaScript / Wasm runtime that powers Cloudflare Workers"
Jason Hostetter
Hmm, OK - I'll do some more testing and see if I can get more info. Thanks.

Does it mean anything if this only happens in production, not locally? Trying with an 11 MB file with 99 frames now: it works as expected locally, but in prod it waits 2.5 minutes and then returns an Error 1105 "Temporarily Unavailable", with no logs in the dashboard or from wrangler tail.

OK - to close the loop, I think I fixed this. It does look like it was a memory leak related to the WASM module: I was creating a new encoding buffer for each frame rather than reusing the same buffer. It now works as expected. Is there a reason it throws the cryptic "Promise will never complete" error rather than a "Memory limit exceeded" error? That would make debugging a lot easier!
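The fix described above can be sketched as follows (hypothetical names throughout: `encodeFrame` stands in for the real WASM encode call, and the buffer size is an assumed worst case). The idea is to allocate one scratch buffer up front and reuse it for every frame, rather than allocating per frame; with a WASM module holding its own linear memory, per-frame allocations can push the isolate past the Workers memory limit (128 MB at the time of writing).

```javascript
// One scratch buffer, allocated once and reused across all frames.
const MAX_ENCODED_FRAME = 1024 * 1024; // assumed worst-case encoded size
const scratch = new Uint8Array(MAX_ENCODED_FRAME);

// Stand-in for the real WASM encoder: writes into the caller's buffer
// and returns the number of bytes written.
function encodeFrame(frame, out) {
  out.set(frame);
  return frame.length;
}

function encodeAll(frames) {
  const sizes = [];
  for (const frame of frames) {
    const n = encodeFrame(frame, scratch); // no per-frame allocation
    sizes.push(n);
    // ...upload scratch.subarray(0, n) to R2 here...
  }
  return sizes;
}
```

The same principle applies on the WASM side: if the module exposes its own alloc/free, allocate the frame buffer once and pass the same pointer each iteration instead of leaking a fresh allocation per frame.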