Feature request: it would be nice to be able to set message expiry (max age and/or max retries) at the message level.
We have a scenario where some messages must eventually be processed, while others should be discarded if not processed within a certain timeframe.
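Something like this is what we're imagining (all names here are hypothetical; `maxAge`/`maxRetries` don't exist today, they're the options we're asking for):

```ts
// Hypothetical shape of the API we're asking for -- nothing here exists today.
interface EnqueueOptions {
  body: unknown;
  maxAge?: number;     // seconds; discard the message if it's still unprocessed after this long
  maxRetries?: number; // give up after this many failed delivery attempts
}

declare const queue: { enqueue(opts: EnqueueOptions): Promise<void> };

// A CRON-style batch job: fine to drop if it goes stale.
await queue.enqueue({
  body: { kind: "recalculate-user-stats" },
  maxAge: 15 * 60,
  maxRetries: 3,
});

// A critical event: omit the expiry options so it's retained until processed.
await queue.enqueue({
  body: { kind: "record-payment", userId: "123" },
});
```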
Interesting: in these scenarios could you route those messages to a separate queue? We're rolling out customized message retention periods on a per-queue basis in the next few days.
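Roughly speaking, something like this (illustrative sketch only, not the final API or option names):

```ts
// Illustrative only -- the queue client and option names are placeholders, not the real API.
declare const queues: {
  create(name: string, opts?: { retentionSeconds?: number }): {
    enqueue(body: unknown): Promise<void>;
  };
};

// Droppable work goes to a queue with a short retention period...
const staleable = queues.create("user-cron-batches", { retentionSeconds: 15 * 60 });

// ...while critical work goes to a queue with the default (keep-until-processed) retention.
const critical = queues.create("user-critical-events");

await staleable.enqueue({ kind: "recalculate-stats" });
await critical.enqueue({ kind: "record-payment", userId: "42" });
```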
Nice!
Well... we're using one queue at the moment because we don't have a better solution for the following requirement: all of the messages in question perform operations on users in our database (either a specific user or a large batch of users). If these batch jobs all run concurrently, there's going to be a lot of contention, with jobs potentially trying to operate on the same users at the same time.
So we use a single queue with a concurrency of 1 for basically everything that operates on users. It could be a bit of a bottleneck... but the throughput is fine at the moment and it just allows us to eliminate contention between all these batch jobs. Can't think of a better way of achieving this?
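For context, our current setup looks roughly like this (the queue client here is invented for illustration; the real point is the single queue with concurrency 1):

```ts
// Invented client for illustration -- the shape of our setup, not a real API.
declare const queues: {
  create(name: string, opts: { concurrency: number }): {
    enqueue(body: unknown): Promise<void>;
  };
};

// Every job that touches users funnels through this one queue, processed strictly
// one message at a time, so no two jobs can mutate the same user concurrently.
const userOps = queues.create("user-operations", { concurrency: 1 });

await userOps.enqueue({ kind: "cron-recalculate-stats", userIds: ["1", "2", "3"] });
await userOps.enqueue({ kind: "record-payment", userId: "42" });
```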
Some of these batch jobs are triggered on a CRON, so they should just be discarded if still unprocessed by the time the next CRON triggers another.
And some batch jobs are critical events that must be ingested eventually and therefore not discarded.
Hopefully that clarifies our use-case.
Right now, we just concede that the CRON batch jobs will keep accruing if unprocessed, which is not perfect, but it only happens when some part of the processing pipeline is down anyway, so not the biggest deal.
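If it helps to illustrate, the stopgap we've considered is stamping the enqueue time on each message and having the handler itself drop stale CRON batch jobs (names and handler signature are invented, this is just a sketch):

```ts
// Consumer-side expiry as a stopgap; everything here is a sketch, not our real code.
const CRON_INTERVAL_MS = 60 * 60 * 1000; // the batch job is re-enqueued hourly anyway

interface UserQueueMessage {
  kind: "cron-batch" | "critical-event";
  enqueuedAt: number; // Date.now() at enqueue time
  payload: unknown;
}

async function handleUserMessage(msg: UserQueueMessage): Promise<void> {
  const age = Date.now() - msg.enqueuedAt;

  // A stale CRON batch job has already been superseded by the next run, so just ack and drop it.
  if (msg.kind === "cron-batch" && age > CRON_INTERVAL_MS) {
    return;
  }

  // Critical events (and fresh batch jobs) are processed normally.
  await processUserOperation(msg.payload);
}

declare function processUserOperation(payload: unknown): Promise<void>;
```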
I think it's been suggested before that it would be super cool to have a "sharded concurrency key" (for lack of a better term), where, for example, we could use the user ID as the concurrency key so that messages with different keys can run in parallel but those with the same key have limited concurrency.
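To make that concrete, the semantics I have in mind are roughly this in-process sketch (a per-key promise chain; the real thing would obviously need to live inside the queue service, and all names are made up):

```ts
// Sketch of "sharded concurrency key" semantics: messages with different keys run in
// parallel, messages with the same key run one at a time. Illustration only.
const chains = new Map<string, Promise<void>>();

function runWithConcurrencyKey(key: string, task: () => Promise<void>): Promise<void> {
  const previous = chains.get(key) ?? Promise.resolve();
  // Run after the previous task for this key has finished, even if that task failed.
  const next = previous.then(task, task);
  chains.set(key, next);
  return next;
}

declare function syncProfile(userId: string): Promise<void>;
declare function recalcStats(userId: string): Promise<void>;

// Jobs for user 1 are serialised with each other...
runWithConcurrencyKey("user-1", () => syncProfile("user-1"));
runWithConcurrencyKey("user-1", () => recalcStats("user-1"));
// ...but run in parallel with jobs for user 2.
runWithConcurrencyKey("user-2", () => syncProfile("user-2"));
```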
But I digress, this would still require using the same queue anyway.