i suspect this has something to do with some request not being completed for some reason, which I was having issues with in the past. Is there a way to tell what it is waiting for?
is there a way to limit queue throughput? i have an upstream rate limit that i don't want to hit. i might have event spikes during the day and i don't want to drop requests, so i want to cap myself at a set throughput (relatively speaking it's low, otherwise i wouldn't be asking for this)
like i want at most 10 messages per second. i have max_concurrency on 1, but if the 10 messages in the batch complete in less than 1 second then it doesn't work
you could simply wait a second after each message in the batch. Then if you have all 10 messages in a batch, it shouldn't complete more than 1 per second
for example with this function, just call await wait(1000) to wait a second
this does not count as cpu time
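A minimal sketch of such a wait helper, assuming a standard Promise wrapper around setTimeout (the exact function wasn't shown):

```ts
// Sketch of the assumed wait helper: a Promise that resolves after
// `ms` milliseconds. The Worker is idle while the timer runs, so the
// delay is wall-clock time, not CPU time.
const wait = (ms: number): Promise<void> =>
  new Promise((resolve) => setTimeout(resolve, ms));
```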
you could of course decrease that wait period if you want to process messages faster than one per second (you could use 100ms to process at most 10 per second)
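Put together, a consumer applying this pattern might look like the sketch below. handleMessage is a hypothetical placeholder for the rate-limited upstream call; the handler shape follows the standard Cloudflare Queues consumer API:

```ts
const wait = (ms: number): Promise<void> =>
  new Promise((resolve) => setTimeout(resolve, ms));

// Hypothetical placeholder for the upstream call being rate limited.
async function handleMessage(body: unknown): Promise<void> {
  // ... send `body` to the upstream API ...
}

export default {
  // Standard Queues consumer handler: with max_concurrency at 1,
  // processing messages one at a time with a 100ms pause after each
  // caps throughput at roughly 10 messages per second.
  async queue(batch: MessageBatch<unknown>): Promise<void> {
    for (const message of batch.messages) {
      await handleMessage(message.body);
      message.ack(); // acknowledge so the message isn't redelivered
      await wait(100); // throttle to at most ~10 messages per second
    }
  },
};
```

Note the cap only holds per invocation: if the consumer scales out past a concurrency of 1, each invocation throttles independently.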
I am seeing the same thing both with low concurrency and timeouts
So the queue backlog is huge
There's a few reasons why consumers might not autoscale (documented here: https://developers.cloudflare.com/queues/configuration/consumer-concurrency/#why-are-my-consumers-not-autoscaling) (cc @ajgeiss0702)
If you're seeing timeouts @ac, it most likely means your Queue consumer is taking too long to process a message. You'll need to refactor your consumer to process the messages in under 30s of CPU time
Thanks, I will take a look. Ultimately I believe the issue is browser rendering limits/the CF Browser Renderer holding onto processes for way longer than it's supposed to, which causes the whole queue to be backed up
I saw this, and I don't think it applies:
1. Max concurrency is not set
2. No errors are reported in the queues dashboard
3. batches are processed every 15 minutes (because of the other issue I mentioned) and it hasn't really scaled up
It has scaled up to 2 concurrency a few times, but hasn't gone past that, and it didn't stay at 2 for very long
I would like to ask about a billing-related issue. I am currently facing a problem with this charge:
Remaining time on 11 × Images Stream Bundle Basic Stream storage per thousand minutes
What is this fee for?
?crossposting
Please do not post your question in multiple channels/post it multiple times per the rules at #😃welcome-and-rules. It creates confusion for people trying to help you and doesn't get your issue or question solved any faster.
I'm trying to implement a simple queue. My worker sends a queue message to the consumer. I can see the message in the queue via the dashboard, but for some reason, the messages are just sitting in the queue, and processing doesn't start for 20 minutes. Do you have any ideas about what can cause this? I have logging and I can see that it is just sitting in the queue - no errors or anything.
The timeouts and settings are default. I've spent a couple of hours but can't find a solution. I hope it is not an unannounced service disruption. 🙃
This time it took 4m 38s for a message to start processing 😖