Cloudflare Developers

Welcome to the official Cloudflare Developers server. Here you can ask for help and stay updated with the latest news

I'm getting `Queue sendBatch failed: Bad

I'm getting `Queue sendBatch failed: Bad Request` despite my requests being correctly shaped. However, I am sending thousands of messages across multiple parallel sendBatch requests. Is it possible this is actually the 5000 messages-produced-per-second limit instead?...
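If the per-second limit mentioned above is the culprit, one hedged mitigation is to keep each sendBatch call small and send the chunks sequentially rather than in parallel. A minimal TypeScript sketch, assuming a hypothetical producer binding named MY_QUEUE and an assumed per-call cap of 100 messages (check the current Queues limits docs for exact numbers):

```ts
// Sketch: split a large list of message bodies into small sendBatch calls
// and send them one after another instead of firing them all in parallel,
// which makes it easy to burst past a per-second production cap.
interface Env {
  MY_QUEUE: Queue; // hypothetical producer binding; Queue type from @cloudflare/workers-types
}

const BATCH_LIMIT = 100; // assumed per-sendBatch message cap

export async function sendAll(env: Env, bodies: unknown[]): Promise<void> {
  for (let i = 0; i < bodies.length; i += BATCH_LIMIT) {
    const chunk = bodies.slice(i, i + BATCH_LIMIT).map((body) => ({ body }));
    await env.MY_QUEUE.sendBatch(chunk); // sequential sends keep throughput steadier
  }
}
```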

I really would like to use Cloudflare

I really would like to use Cloudflare Queues instead of a third-party provider, but the consumer location not being affected by Smart Placement (round trips become a huge issue), concurrent consumers only scaling at the end of a batch, and the 6 simultaneous open connections per worker instance mean that concurrency autoscaling doesn't work as expected and takes too long to scale up. My queue backlog becomes huge. I get that the magic of autoscaling would be great, but reading people complaining about the same thing shows that we are not there yet, or maybe we are just holding it wrong. I believe things would be way better if consumers scaled up as messages come in, or if we had a min_concurrency setting (of course I don't know how viable it would be for us to have those). I'm really frustrated by the results of trying to use Queues again and again since the beta and still hitting the same problem....

How is the consumer location chosen? Is

How is the consumer location chosen? Is it affected by smart placement?

is it within the road map for http push

Is HTTP push on the roadmap? (sending a message via HTTP, without a Worker)
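In the meantime, the usual workaround is a thin Worker that accepts an HTTP request and forwards it to the queue. A minimal sketch, assuming a hypothetical MY_QUEUE producer binding and leaving authentication out:

```ts
// Minimal HTTP-to-queue shim: accept a POST and enqueue its JSON body.
// MY_QUEUE is a hypothetical producer binding; add auth before exposing this.
interface Env {
  MY_QUEUE: Queue;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    if (request.method !== "POST") {
      return new Response("POST only", { status: 405 });
    }
    const body = await request.json();
    await env.MY_QUEUE.send(body);
    return new Response("queued", { status: 202 });
  },
};
```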

On the worker queue producer side, I

On the Worker queue producer side, I have a tough time understanding why the worker sometimes just fails to complete the `await env.QUEUE.send()`. From testing, I noticed "warmed up" invocations of the worker wouldn't have this issue, but when it has been idling for a while it would (for a good amount of time) just spin itself down before the `await env.QUEUE.send()` call completes. Request IDs affected: 930d4183797951e2 930d2b064f611f41...
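The request IDs above would need support to diagnose, but one common cause of this symptom is the invocation finishing before the send promise settles. A hedged sketch of two ways to keep the enqueue inside the invocation's lifetime, using a hypothetical MY_QUEUE binding:

```ts
// Sketch: make sure the enqueue completes within the invocation.
interface Env {
  MY_QUEUE: Queue; // hypothetical producer binding
}

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    const payload = { url: request.url, at: Date.now() };

    // Option 1: await the send before returning, so the response cannot
    // go out with the enqueue still in flight.
    await env.MY_QUEUE.send(payload);

    // Option 2: respond immediately and hand the promise to waitUntil so
    // the runtime keeps the invocation alive until the send settles.
    // ctx.waitUntil(env.MY_QUEUE.send(payload));

    return new Response("ok");
  },
};
```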

Same problem here, `Consumer Delay` is

Same problem here, `Consumer Delay` is about 32 seconds. My configuration:
```
max_batch_size = 1
max_batch_timeout = 0
...
```

Hi, what kind of consumer delays are

Hi, what kind of consumer delays are normal with Queues? I have a simple low-volume case where a Durable Object puts individual messages onto a queue, and then a worker picks up and processes those messages. I want to process the messages one by one without any delay, so I have configured the delay as 0 and the batch size as 1. The actual execution in my worker is very fast, 100-300 ms as expected. But there seems to be some strange delay before the worker picks up the message from the queue, and when I look at the queue metrics in the console it says "Consumer Delay: 3.4 sec". So I feel I am losing an extra 3 seconds somewhere, which means this is not OK for any customer-facing online use case. I don't have much experience with queues, so I don't know if this is normal or not, but I was expecting the added latency to be in the tens or hundreds of milliseconds....
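One way to see where that time goes is to stamp the enqueue time into each message and have the consumer log how long the message actually sat in the queue, separately from processing time. A diagnostic sketch with made-up binding and field names:

```ts
// Producer stamps Date.now() into the body; consumer logs the queue delay.
interface Env {
  MY_QUEUE: Queue; // hypothetical producer binding
}

type Job = { payload: unknown; enqueuedAt: number };

// Producer side (e.g. called from the Durable Object):
export async function enqueue(env: Env, payload: unknown): Promise<void> {
  const job: Job = { payload, enqueuedAt: Date.now() };
  await env.MY_QUEUE.send(job);
}

// Consumer side:
export default {
  async queue(batch: MessageBatch<Job>, env: Env): Promise<void> {
    for (const msg of batch.messages) {
      const queueDelayMs = Date.now() - msg.body.enqueuedAt;
      console.log(`queue delay ${queueDelayMs} ms, attempt ${msg.attempts}`);
      // ...process msg.body.payload here...
      msg.ack();
    }
  },
};
```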

Is Queues going to be getting any love

Is Queues going to be getting any love from dev week? Free tier?

Super cool. A few questions:

Super cool. A few questions: 1. Can we trigger it programmatically? 2. Do you plan to also allow purging based on message params? Like conditionally purging them?...

Pretty sure something got broken with

Pretty sure something got broken with this release. In my logs, I can see that adding to one of my queues started failing ~2 hours ago. The error looks like this: internal error; reference = odmj851jl3gua27r036349h7

Quick question if I may, but will queues

Quick question if I may, but will queues eventually add support for remote dev? Currently any worker that makes use of queues still isn't able to make use of things like quick edit, edge preview, remote dev, etc...

Is there any issue with queue? because

Is there any issue with Queues? Messages in the queue are resolving very late!

Where do queue consumers execute? I've

Where do queue consumers execute? I've tried some searching here, but can't seem to find an authoritative answer... I mean, you could have events fed into a queue from all over the world, but to ensure ordered handling, do consumers only execute in one geographic location? Meaning the latency from a producer queueing a message to a consumer processing a message will vary significantly, and so queues should not be used for anything "real-time-ish" - is that correct?

Did the queues `sendBatch` api change?

Did the queues sendBatch API change? I've been running a worker in production for around 7 months now, and it has been working fine. The past few days (the first occurrence was February 21st, 23:40 UTC, and the previous successful invocation was 23:10 UTC), I've suddenly been getting an error: sendBatch() requires at least one message. I've added an additional check in my code to make sure the batch isn't empty to fix the issue, but I missed 2 days of data (that I cannot recover) before I noticed the issue....
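For anyone hitting the same thing, the guard mentioned above is a one-liner: skip the call when there is nothing to send, since sendBatch() rejects an empty batch. A sketch, with MY_QUEUE as a hypothetical producer binding:

```ts
// Skip sendBatch entirely when the batch is empty.
interface Env {
  MY_QUEUE: Queue; // hypothetical producer binding
}

export async function sendIfAny(env: Env, bodies: unknown[]): Promise<void> {
  if (bodies.length === 0) {
    return; // nothing to enqueue this run
  }
  await env.MY_QUEUE.sendBatch(bodies.map((body) => ({ body })));
}
```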

What would cause a queue to send the

What would cause a queue to send the same message multiple times? Is there a timeout I might have set wrong? Errors cause retries.... 🤦 which cause more errors 🤦 🤦
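If the duplicates come from whole-batch retries after a single failure, one approach is to acknowledge and retry per message and cap the attempts. A sketch assuming explicit acknowledgement; the attempt cap and the `retry({ delaySeconds })` backoff are illustrative choices, not the only way to do it:

```ts
// Per-message ack/retry so one bad message does not retry the whole batch,
// plus an attempt cap so errors cannot loop forever.
const MAX_ATTEMPTS = 3; // illustrative cap

export default {
  async queue(batch: MessageBatch<unknown>, env: unknown): Promise<void> {
    for (const msg of batch.messages) {
      try {
        await handle(msg.body);
        msg.ack(); // processed: do not redeliver
      } catch (err) {
        if (msg.attempts >= MAX_ATTEMPTS) {
          console.error(`giving up on message ${msg.id}`, err);
          msg.ack(); // or rely on a dead letter queue configured on the consumer
        } else {
          msg.retry({ delaySeconds: 30 }); // back off instead of retrying immediately
        }
      }
    }
  },
};

async function handle(body: unknown): Promise<void> {
  // ...actual processing goes here...
}
```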

thanks, any plans to increase those

thanks, any plans to increase those limits?


setting this in queues consumer
[[queues.consumers]]
queue = "gopersonal-llm-ranking-queue"
max_batch_size = 100
max_batch_timeout = 180
max_concurrency = 1
[[queues.consumers]]
queue = "gopersonal-llm-ranking-queue"
max_batch_size = 100
max_batch_timeout = 180
max_concurrency = 1
and getting the error...

Hello. What would the average queue

Hello. What would the average queue backlog be on a healthy queue? Ours right now keeps sitting at around 760, with a delayed backlog of around 530.

Feature request: would be nice to be

Feature request: it would be nice to be able to set expiry (max age and/or max retries) at the message level. We have a scenario where some messages are important and must be processed eventually, while other messages should be discarded if not processed within a certain timeframe....
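Until something like that exists, a possible workaround is to carry a deadline inside the message body and have the consumer ack-and-drop anything already past it. A sketch with made-up field names:

```ts
// Carry an expiry timestamp in the body; the consumer discards stale messages.
type TimedJob = { payload: unknown; expiresAt: number | null };

export default {
  async queue(batch: MessageBatch<TimedJob>, env: unknown): Promise<void> {
    for (const msg of batch.messages) {
      const { payload, expiresAt } = msg.body;
      if (expiresAt !== null && Date.now() > expiresAt) {
        msg.ack(); // past its deadline: drop instead of processing or retrying
        continue;
      }
      // ...process payload...
      msg.ack();
    }
  },
};
```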