hey! is there any (unofficial) goal for Queues performance? will it ever be comparable to e.g. SQS? sending to the queue currently takes multiple hundreds of milliseconds, and it takes multiple seconds for a worker to receive it. and throughput is much, much lower at the moment.
I haven't run into messages dropping or batches dropping. I have built some interesting architecture where I have a message-producer API which abstracts all the applications from needing to know how to send messages. We use a KV namespace bound to the message-producer API to know which queue to route to, and bind the worker to any queue. I know there are limits to the bindings, but this API has made it easy to onboard queues for a few different use cases. I'm ready for this to move to GA. What issues is the team waiting to resolve before moving it to GA?
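For anyone curious, a rough sketch of what that routing pattern can look like (the binding names and KV key scheme here are made up for illustration, not the actual setup described above): a producer Worker resolves the destination queue binding from KV and sends through it.

```ts
// Sketch of a message-producer Worker that routes to a queue binding chosen via KV.
// ROUTES maps a message type to a queue binding name (all names are assumptions).
interface Env {
  ROUTES: KVNamespace;   // e.g. "order.created" -> "ORDERS_QUEUE"
  ORDERS_QUEUE: Queue;   // example queue bindings
  EMAILS_QUEUE: Queue;
}

export default {
  async fetch(req: Request, env: Env): Promise<Response> {
    const { type, payload } = await req.json<{ type: string; payload: unknown }>();

    // Look up which queue binding should receive this message type.
    const bindingName = await env.ROUTES.get(type);
    if (!bindingName) return new Response("unknown message type", { status: 400 });

    const queue = (env as unknown as Record<string, Queue>)[bindingName];
    if (!queue) return new Response("no binding named " + bindingName, { status: 500 });

    await queue.send(payload);
    return new Response("enqueued", { status: 202 });
  },
};
```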
is there a plan to have messages be "grouped" and sent that way? i have a merchantId prop on my messages and i want to handle up to 20 messages per merchant, but there isn't really a way to do that right now
it would be really neat if i could do that honestly
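As far as I know there's no built-in message-group feature today, but grouping within a single batch in the consumer gets part of the way there. A rough sketch (the 20-per-merchant cap mirrors the use case above; the processing helper is a placeholder):

```ts
// Group a batch by merchantId and handle up to 20 messages per merchant;
// anything beyond that is retried so it shows up again in a later batch.
// Note this only groups within one batch, not across batches.
interface MerchantMessage {
  merchantId: string;
  // ...other fields
}

export default {
  async queue(batch: MessageBatch<MerchantMessage>): Promise<void> {
    const byMerchant = new Map<string, Message<MerchantMessage>[]>();
    for (const msg of batch.messages) {
      const group = byMerchant.get(msg.body.merchantId) ?? [];
      group.push(msg);
      byMerchant.set(msg.body.merchantId, group);
    }

    for (const [merchantId, msgs] of byMerchant) {
      const take = msgs.slice(0, 20);
      const overflow = msgs.slice(20);
      await processMerchantBatch(merchantId, take.map((m) => m.body));
      take.forEach((m) => m.ack());
      overflow.forEach((m) => m.retry());
    }
  },
};

// Placeholder for whatever per-merchant processing looks like.
async function processMerchantBatch(merchantId: string, bodies: MerchantMessage[]) {}
```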
Hi there, just wanted to know if there is any way to see which buckets are sending notifications to a queue
Couple queries:
Any plans to add a Message.deadletter method? I have some use cases where I know the message will always fail and it needs manual triage.
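In the meantime, one workaround is to bind the dead-letter queue as a producer and forward known-poison messages to it yourself before acking. A hedged sketch (the DLQ binding name and the helpers are assumptions):

```ts
// Manual "deadletter" until something like Message.deadletter() exists:
// send the body to the dead-letter queue via a producer binding, then ack.
interface Env {
  DLQ: Queue; // producer binding pointing at the dead-letter queue (assumed name)
}

export default {
  async queue(batch: MessageBatch, env: Env): Promise<void> {
    for (const msg of batch.messages) {
      try {
        await handle(msg.body);
        msg.ack();
      } catch (err) {
        if (isPermanentFailure(err)) {
          // Known-permanent failure: route straight to the DLQ for manual triage.
          await env.DLQ.send(msg.body);
          msg.ack();
        } else {
          msg.retry();
        }
      }
    }
  },
};

// Placeholders for the application's own logic.
async function handle(body: unknown) {}
function isPermanentFailure(err: unknown): boolean { return false; }
```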
It'd be great to have redrive functionality to move messages from the dead-letter queue back to the processing queue once any issues with the dead letters have been resolved (e.g. a temporary outage). AWS SQS has similar functionality: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-configure-dead-letter-queue-redrive.html
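Until a native redrive exists, a manual version can be sketched as a consumer temporarily attached to the dead-letter queue that re-sends everything to the source queue (SOURCE_QUEUE is an assumed producer binding):

```ts
// Attach this consumer to the dead-letter queue while redriving, then detach it.
interface Env {
  SOURCE_QUEUE: Queue; // producer binding for the original processing queue
}

export default {
  async queue(batch: MessageBatch, env: Env): Promise<void> {
    for (const msg of batch.messages) {
      // Re-enqueue the body on the source queue, then ack it off the DLQ.
      await env.SOURCE_QUEUE.send(msg.body);
      msg.ack();
    }
  },
};
```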
Also, are there plans to support passing a Message into an RPC Worker?
I don't see wrangler.toml's equivalent of [[queues.consumers]] referenced in https://developers.cloudflare.com/workers/configuration/multipart-upload-metadata/#bindings -- is that intentional? if so, how does one configure a Worker uploaded from the API to be a queue consumer? I've added "type": "queue" in my bindings metadata, but that only seems to register the worker as a queue producer.
It's done via this API: https://developers.cloudflare.com/api/operations/queue-v2-create-queue-consumer
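Roughly, that call can be made like this. The URL path follows the account/queue pattern from the docs, but the request body fields shown are assumptions, so verify them against the API reference linked above:

```ts
// Sketch of registering an API-uploaded Worker as a queue consumer.
// Body fields (type, script_name, settings) are assumed; check the API docs.
async function createQueueConsumer(accountId: string, queueId: string, apiToken: string) {
  const res = await fetch(
    `https://api.cloudflare.com/client/v4/accounts/${accountId}/queues/${queueId}/consumers`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        type: "worker",
        script_name: "my-user-uploaded-worker",
        settings: { batch_size: 10, max_retries: 3 },
      }),
    }
  );
  if (!res.ok) throw new Error(`consumer creation failed: ${res.status}`);
  return res.json();
}
```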
ah, I forgot to mention that I was trying to create queue consumers for user-uploaded scripts. I filed https://github.com/cloudflare/workers-sdk/issues/6758
Via WfP?
yep
Can you not just have the dispatcher call the user Worker?
Also, I don't think WfP actually supports any kind of events other than the ones sent by the dispatcher atm
that’s the direction I was heading, by passing messages received in the dispatcher’s queue consumer to the user worker via a POST fetch. but how can the user worker call message.retry(), since the messages will be serialized into JSON?
It returns an object back indicating which messages to retry/ack
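A rough sketch of that pattern: the dispatcher's queue consumer POSTs the serialized batch to the user Worker via the dispatch namespace, then applies ack()/retry() based on the response. The { ack, retry } response shape and the binding/worker names are assumptions here, not an existing API:

```ts
// Dispatcher-side queue consumer that forwards the batch to a user Worker
// and acks/retries based on the message ids it sends back.
interface Env {
  DISPATCHER: DispatchNamespace; // Workers for Platforms dispatch namespace binding
}

export default {
  async queue(batch: MessageBatch, env: Env): Promise<void> {
    const userWorker = env.DISPATCHER.get("customer-worker-name");

    const res = await userWorker.fetch("https://internal/queue", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(batch.messages.map((m) => ({ id: m.id, body: m.body }))),
    });

    // The user Worker answers with which message ids to ack vs. retry.
    const { ack, retry } = await res.json<{ ack: string[]; retry: string[] }>();
    for (const msg of batch.messages) {
      if (ack.includes(msg.id)) msg.ack();
      else if (retry.includes(msg.id)) msg.retry();
    }
  },
};
```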
is it just me, or does delaySeconds no longer work? it looks like it stopped working on 9/17 around 18:00 UTC. it's still listed in the dev docs, so it wasn't removed, was it?
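For context, this is the usage in question, as a minimal sketch (MY_QUEUE is an assumed binding name):

```ts
// Send a message and ask Queues to hold it for 60 seconds before delivery.
interface Env {
  MY_QUEUE: Queue;
}

export default {
  async fetch(_req: Request, env: Env): Promise<Response> {
    await env.MY_QUEUE.send({ hello: "world" }, { delaySeconds: 60 });
    return new Response("sent");
  },
};
```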