Worker orchestration idea might be overkill
Hi,
I need to trigger a processing task inside a worker 2s after my worker receives a POST request, and then once again 10s after.
I want all POST requests that arrive between 0 and 2s to be processed together as a single time-ordered batch.
(2s and 10s are arbitrary amounts and could change)
My idea was the following:
• my main worker caches each POST request's payload in a Durable Object
• the main worker then pings an echo worker whose only job is to start a timer if idle, and to ignore all subsequent pings until its timer fires
• the echo worker pings the main worker back, which can then process all the requests cached in the Durable Object
Am I over-engineering this?
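For illustration, here is a minimal, framework-agnostic sketch of the batch-and-flush behavior described above, written as plain TypeScript so it runs outside Workers. In the real setup the buffer would live in the Durable Object and the timer in the echo worker; the `Batcher` name and shapes here are illustrative, not an actual Cloudflare API.

```typescript
// Sketch of the batch-and-flush idea: the first payload arms a timer,
// later payloads just join the pending batch (the "ignore pings" rule),
// and the timer fires once with the whole time-ordered batch.
type Payload = { receivedAt: number; body: string };

class Batcher {
  private buffer: Payload[] = [];
  private timer: ReturnType<typeof setTimeout> | null = null;

  constructor(
    private delayMs: number,
    private flush: (batch: Payload[]) => void,
  ) {}

  // Called once per POST request.
  add(body: string): void {
    this.buffer.push({ receivedAt: Date.now(), body });
    if (this.timer === null) {
      // Only the first payload starts the timer.
      this.timer = setTimeout(() => {
        const batch = this.buffer.sort((a, b) => a.receivedAt - b.receivedAt);
        this.buffer = [];
        this.timer = null;
        this.flush(batch); // process the time-ordered batch
      }, this.delayMs);
    }
  }
}
```

Usage: `new Batcher(2000, batch => process(batch))`, then call `add()` on every POST; everything arriving in the first 2s flushes together.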
3 Replies
Just ran quickly through the Queues docs. It seems appropriate for my timeout-based batching.
But if I ever need to trigger processing along an arbitrary sequence of delays (2s, 10s, 25s, for example), I'm not sure it can cover that.
I need to dig deeper into the documentation.
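For the arbitrary-sequence case, the core loop is simple enough to sketch in plain TypeScript (runnable as-is; `runSchedule` is a made-up name, and each delay here is relative to the previous step). Inside Workers you can't just sleep between long steps, though: the documented pattern there would be a Durable Object alarm that re-arms itself via `storage.setAlarm()` with the next delay each time `alarm()` fires.

```typescript
// Fire a step callback along an arbitrary sequence of delays,
// e.g. runSchedule([2000, 10000, 25000], i => doWork(i)).
async function runSchedule(
  delaysMs: number[],
  step: (i: number) => void,
): Promise<void> {
  for (let i = 0; i < delaysMs.length; i++) {
    // Wait out the next delay, then run that step.
    await new Promise((r) => setTimeout(r, delaysMs[i]));
    step(i);
  }
}
```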
Cloudflare Docs — Batching, Retries and Delays · Cloudflare Queues: "When configuring a consumer Worker for a queue, you can also define how messages are batched as they are delivered."
Yeah, I've dug deeper into the docs.
I'm still not convinced it's the way to go for my requirements:
My requests need to be handled on a per-caller-id basis, meaning that all POSTs from foo should be batched and delayed separately from POSTs from bar.
I don't see a clear and clean way to do that with Queues.
I would have to track state for each caller-id (which is OK) and set delays manually like this:
await env.YOUR_QUEUE.send(message, { delaySeconds: 600 })
and finally set max_batch_size to 1 (or max_batch_timeout to 0), which pretty much makes the queue pointless.
The queue could still be helpful for long delays, since I imagine keeping a worker alive has restrictions and costs, but beyond that I see no point.
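For what it's worth, the per-caller-id batching can be sketched without Queues at all: one buffer and one timer per caller id. With Durable Objects the same idea falls out naturally, since `idFromName(callerId)` gives you one object instance per caller. The sketch below is self-contained plain TypeScript; `PerCallerBatcher` is an illustrative name, not a library API.

```typescript
// Per-caller batching: each caller id gets its own buffer and timer,
// so POSTs from "foo" and "bar" are batched and delayed independently.
class PerCallerBatcher {
  private buffers = new Map<string, string[]>();
  private timers = new Map<string, ReturnType<typeof setTimeout>>();

  constructor(
    private delayMs: number,
    private flush: (callerId: string, batch: string[]) => void,
  ) {}

  add(callerId: string, body: string): void {
    const buf = this.buffers.get(callerId) ?? [];
    buf.push(body);
    this.buffers.set(callerId, buf);
    // The first payload for a given caller arms that caller's timer;
    // later payloads just join the pending batch.
    if (!this.timers.has(callerId)) {
      this.timers.set(
        callerId,
        setTimeout(() => {
          const batch = this.buffers.get(callerId) ?? [];
          this.buffers.delete(callerId);
          this.timers.delete(callerId);
          this.flush(callerId, batch);
        }, this.delayMs),
      );
    }
  }
}
```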