setting this in my queues consumer config
```toml
[[queues.consumers]]
queue = "gopersonal-llm-ranking-queue"
max_batch_timeout = …
```
and getting the error:
✘ [ERROR] A request to the Cloudflare API (/accounts/920b1a6e159cf77dab28969103a4765b/queues/c30e85e6726c455b880498715d8a0b4c/consumers/5bad5c90b2644f498177c45a164e96ac) failed.
Queue consumer (type worker) has invalid settings: maximum wait time must be between 0 and 60000 ms. [code: 100127]
If you think this is a bug, please open an issue at: https://github.com/cloudflare/workers-sdk/issues/new/choose
If we put the value of max_batch_timeout to 60 it works... does that mean the error message is in milliseconds and the actual limit is 60 seconds?

Yep, you got it: the max batch wait time is 60 seconds [1], the backend REST API takes milliseconds [2], and wrangler multiplies the config value by 1000 [3], which makes the error message confusing : )
[1] https://developers.cloudflare.com/queues/platform/limits/
[2] https://developers.cloudflare.com/api/resources/queues/subresources/consumers/methods/create/
[3] https://github.com/cloudflare/workers-sdk/blob/0322d085f634c1a0a12a59b4db293088d0cadb62/packages/wrangler/src/deploy/deploy.ts#L1236
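To make the units concrete, here is a minimal TypeScript sketch (an illustration only, not wrangler's actual code) of the conversion described above: the `max_batch_timeout` you write in the config is in seconds, the API limit is 60000 ms, and the CLI multiplies by 1000 before calling the API.

```ts
// Illustration of the unit mismatch: max_batch_timeout is configured in seconds,
// but the Queues REST API expects milliseconds, so the CLI converts before sending.
const MAX_WAIT_MS = 60_000; // the limit the API error message refers to (60 seconds)

function toApiWaitTimeMs(maxBatchTimeoutSeconds: number): number {
  const ms = maxBatchTimeoutSeconds * 1000; // the same multiplication wrangler performs
  if (ms < 0 || ms > MAX_WAIT_MS) {
    // mirrors the API error: "maximum wait time must be between 0 and 60000 ms"
    throw new Error(`maximum wait time must be between 0 and ${MAX_WAIT_MS} ms`);
  }
  return ms;
}

toApiWaitTimeMs(60);  // 60000 ms, accepted
// toApiWaitTimeMs(61); // throws: 61 s exceeds the 60 s limit
```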
thanks, any plans to increase those limits?
someone orange would have to speak to that - personally I would love to see larger values allowed there and also the batch size 100msg/256kb lifted a bit as well
Did the queues sendBatch API change?
I've been running a worker in production for around 7 months now, and it has been working fine.
The past few days (the first occurrence was February 21st at 23:40 UTC, and the previous successful invocation was at 23:10 UTC), I've suddenly been getting an error: sendBatch() requires at least one message
I've added an additional check in my code to make sure the batch isn't empty to fix the issue, but I missed two days of data (that I cannot recover) before I noticed the problem.
Is this a recent change in the Queues API? I would appreciate it if you could ping me when you reply 🙂
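The check described above boils down to something like this (a minimal sketch; the `MY_QUEUE` binding and the message shape are placeholders):

```ts
// Skip sendBatch() entirely when there is nothing to send, since it
// rejects empty batches with "sendBatch() requires at least one message".
interface Env {
  MY_QUEUE: Queue<{ id: string }>; // placeholder queue binding
}

async function flushBatch(messages: { id: string }[], env: Env): Promise<void> {
  if (messages.length === 0) return; // guard against the empty-batch error
  await env.MY_QUEUE.sendBatch(messages.map((body) => ({ body })));
}
```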
Hello! How does one deploy multiple queue handlers (for different queues) in one single Worker project with wrangler? In my main index.ts I would like to have multiple queue consumer handlers for different queues.
You can only have one consumer function in your Worker code, but you can get the name of the queue on MessageBatch: https://developers.cloudflare.com/queues/configuration/javascript-apis/#messagebatch
That way you can effectively handle multiple different queues and just change the code you execute depending on the queue name.
Ah I see, very helpful! So I can bind multiple queue consumers to this Worker, and then my handler will only receive messages sent to the queues that are bound to it, is that right?
Correct, yep!
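For illustration, a single `queue()` handler that branches on `batch.queue` could look like this (a sketch; apart from the queue name from the question above, the names and processing functions are placeholders):

```ts
// One consumer handler for a Worker that is configured as the consumer of several queues.
export default {
  async queue(batch: MessageBatch, env: Env, ctx: ExecutionContext): Promise<void> {
    switch (batch.queue) {
      case "gopersonal-llm-ranking-queue":
        await handleRanking(batch.messages);
        break;
      case "some-other-queue": // placeholder queue name
        await handleOther(batch.messages);
        break;
      default:
        console.warn(`No handler for queue ${batch.queue}`);
    }
  },
};

// Placeholder processing functions for the two queues.
async function handleRanking(messages: readonly Message[]): Promise<void> { /* ... */ }
async function handleOther(messages: readonly Message[]): Promise<void> { /* ... */ }
```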
i remembered queues was made free
damn, i just had to test the tf provider...
Has anyone successfully gotten Browser Rendering to work with Queues? I can't even launch a browser without it locking up the worker: https://github.com/cloudflare/puppeteer/issues/93
When I had Browser Rendering working with Queues, I was running into all sorts of issues.
I would recommend writing a Durable Object to handle the browser rendering, and calling that from the queue consumer.
Not ideal, as you then have to pay for wall time.
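A rough sketch of that shape, assuming placeholder names throughout (the `BROWSER` and `RENDERER` bindings, the `RenderSession` class, and the message body are not from the original discussion):

```ts
import { DurableObject } from "cloudflare:workers";
import puppeteer from "@cloudflare/puppeteer";

interface Env {
  BROWSER: Fetcher;                                 // Browser Rendering binding (placeholder name)
  RENDERER: DurableObjectNamespace<RenderSession>;  // Durable Object binding (placeholder name)
}

// Durable Object that owns the browser session, so the queue consumer
// never touches Puppeteer directly.
export class RenderSession extends DurableObject<Env> {
  async render(url: string): Promise<string> {
    const browser = await puppeteer.launch(this.env.BROWSER);
    try {
      const page = await browser.newPage();
      await page.goto(url, { waitUntil: "networkidle0" });
      return await page.content();
    } finally {
      await browser.close();
    }
  }
}

export default {
  async queue(batch: MessageBatch<{ url: string }>, env: Env): Promise<void> {
    for (const msg of batch.messages) {
      // One DO per URL here just for illustration; pick an id scheme that fits your workload.
      const stub = env.RENDERER.get(env.RENDERER.idFromName(msg.body.url));
      await stub.render(msg.body.url);
      msg.ack();
    }
  },
};
```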
true. You could probably also make it work with a regular service binding.
there is a limit increase request form at the bottom of the limits page: https://developers.cloudflare.com/queues/platform/limits/
I'm also experiencing this issue. There seems to be something with the Browser Rendering/Puppeteer API that is causing the queue worker to hang and then ultimately fail. I don't see any meaningful logs in the worker or browser rendering GUI either other than the timeout/disconnect log.
Where do queue consumers execute? I've tried some searching here but can't seem to find an authoritative answer. You could have events fed into a queue from all over the world, but to ensure ordered handling, do consumers only execute in one geographic location? That would mean the latency from a producer enqueueing a message to a consumer processing it can vary significantly, so queues should not be used for anything "real-time-ish" - is that correct?