Hey guys, is there any problem with recursive queue consumers/producers? I have a consumer that produces to the same queue it consumes from, and it seems like metrics are significantly delayed. That said, my queue jobs do take ~30s to run each and can only run one at a time by design. Perhaps it just takes some time for a new deployment to be hooked up, as I'm testing some stuff and deploying pretty often. I can see that the queue I am using is increasing in size and my DB is getting new data, so it is consuming and producing. It seems that as soon as I stop producing to the same queue I am consuming from, the metrics return.
Another question that I am struggling to find any information on is how to interpret the execution duration GB-sec value on Workers. What does this mean?
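For reference, the pattern being described (a consumer that publishes back to the queue it consumes from) might look roughly like this minimal sketch; MATCH_QUEUE, the message shape, and ingestMatch are placeholder names, not taken from the thread:
```ts
// Minimal sketch of a Worker whose queue consumer publishes back to the queue it
// consumes from. MATCH_QUEUE, the message shape, and ingestMatch are placeholders.
export interface Env {
  MATCH_QUEUE: Queue<{ matchId: string }>;
}

export default {
  async queue(batch: MessageBatch<{ matchId: string }>, env: Env): Promise<void> {
    for (const msg of batch.messages) {
      // The ~30s job itself; completing it can uncover follow-up jobs.
      const discovered = await ingestMatch(msg.body.matchId);

      // "Recursive" part: publish the newly discovered jobs to the same queue
      // this consumer is bound to.
      for (const matchId of discovered) {
        await env.MATCH_QUEUE.send({ matchId });
      }
    }
  },
};

// Placeholder for the actual ingestion logic; returns follow-up match ids to enqueue.
async function ingestMatch(matchId: string): Promise<string[]> {
  // ~30s of work against the external API / DB would go here.
  return [];
}
```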
7 Replies
patOP•3w ago
I redeployed after letting it run for some time and I get a CPU time like in the photo. I'm expecting each job to take approximately 200-300ms of CPU time. What this indicates to me is that recursive queue consumers/producers treat contiguous handling of events as a single request. If I treat 200ms as a baseline, then I'd expect ~2400ms / 200ms = 12 messages to have been consumed. This seems fair with what I'm seeing. It's hard to get the exact amount consumed, but I'm just comparing to data in my DB.
[attached screenshot: CPU time graph]
patOP•3w ago
And after redeploying with an env-var change that causes it to recurse for another ~2 jobs' worth of time and then just consume and ignore jobs, I can see a spike of 500ms (2 jobs), then a more reasonably flat graph.
[attached screenshot: CPU time graph after redeploy]
patOP•3w ago
For now I am going to try to fix this by separating it into 2 separate Workers that publish to each other's queues, because I need a way to tell more accurately what my CPU time is; the amount of data I can ingest is limited by it. I had a look around and found this thread that says recursion is not recommended. Can anyone please give me some idea as to why?
For anyone that happens upon this: it seems like it's solved by acking the message or batch in code, i.e. batch.ackAll() or msg.ack(). This is some phantom knowledge I now hold.
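In code, that explicit-ack fix might look roughly like this (same Env and ingestMatch placeholders as the earlier sketch):
```ts
// Same Env / ingestMatch placeholders as the earlier sketch; the fix is the explicit ack.
export default {
  async queue(batch: MessageBatch<{ matchId: string }>, env: Env): Promise<void> {
    for (const msg of batch.messages) {
      await ingestMatch(msg.body.matchId);
      // Acknowledge each message explicitly once its work (including any
      // follow-up publishes) is done, rather than relying on implicit acks.
      msg.ack();
    }
    // Or acknowledge everything in the batch at once:
    // batch.ackAll();
  },
};
```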
Pranshu Maheshwari•3w ago
I had a look around and found this thread that says recursion is not recommended. Can anyone please give me some idea as to why?
You run the risk of putting your Queue & consumer Worker into an infinite loop. Why do you need recursion? retry() would be a better option to put messages back into a queue, if you can use that.
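If retry() fits the use case, it would look roughly like the sketch below: the same message is re-delivered later instead of a new one being published. tryIngestMatch and the failure condition are placeholders.
```ts
// Sketch of the retry() approach: put the same message back on the queue instead of
// publishing a new one. tryIngestMatch and the rate-limit check are placeholders.
export default {
  async queue(batch: MessageBatch<{ matchId: string }>, env: Env): Promise<void> {
    for (const msg of batch.messages) {
      const ok = await tryIngestMatch(msg.body.matchId);
      if (ok) {
        msg.ack();
      } else {
        // Re-deliver this exact message later (counts against the consumer's max_retries).
        msg.retry();
      }
    }
  },
};

// Placeholder: returns false when the upstream API rate-limits us.
async function tryIngestMatch(matchId: string): Promise<boolean> {
  return true;
}
```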
patOP•3w ago
Maybe my definition of recursion and the one in that thread are different. By recursive I don't mean a retry; I mean that the consumption of message A leads to message B being published, i.e. the consumer produces to its own queue because we uncover more jobs by completing a job.
Pranshu Maheshwari•3w ago
Gotcha, that's ok! Just don't get stuck in an infinite loop 🙂 What are you using Workers & Queues for, btw?
patOP•3w ago
The overall goal is to ingest data from an external service that has rate limiting. More specifically, I want to ingest League of Legends games into my DB. As I can logically break it up by match, I am sending a single message on a CF queue for each match and have a consumer that ingests the match. I can create jobs much quicker than I can consume them and I want it to happen in the background, so I'm just chucking as many matches as I need onto the queue and letting it figure itself out. I'll have a cron-based Worker check how many matches have been ingested plus how many jobs are queued/in progress to tell how many more jobs I should push.
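A rough sketch of that cron-based producer, assuming a MATCH_QUEUE producer binding; countPendingJobs, findMatchesToIngest, and TARGET_IN_FLIGHT are made-up placeholders for the bookkeeping described above:
```ts
// Sketch of the cron-based producer. MATCH_QUEUE, countPendingJobs, findMatchesToIngest,
// and TARGET_IN_FLIGHT are assumptions, not confirmed bindings/helpers.
export interface Env {
  MATCH_QUEUE: Queue<{ matchId: string }>;
}

const TARGET_IN_FLIGHT = 50; // rough cap on queued + in-progress jobs

export default {
  async scheduled(controller: ScheduledController, env: Env, ctx: ExecutionContext): Promise<void> {
    // How many jobs are already queued or in progress?
    const pending = await countPendingJobs(env);
    const capacity = Math.max(0, TARGET_IN_FLIGHT - pending);
    if (capacity === 0) return;

    // Publish one message per match still missing from the DB, up to capacity.
    // Note: Queues caps how many messages one sendBatch call accepts, so large
    // backlogs may need chunking.
    const matchIds = await findMatchesToIngest(env, capacity);
    await env.MATCH_QUEUE.sendBatch(matchIds.map((matchId) => ({ body: { matchId } })));
  },
};

// Placeholders for the actual bookkeeping against the DB and queue metrics.
async function countPendingJobs(env: Env): Promise<number> {
  return 0;
}
async function findMatchesToIngest(env: Env, limit: number): Promise<string[]> {
  return [];
}
```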
