Took a quick look - I can see your queue did at one point get up to a max concurrency of 7 (though it's out of range in the graph you screenshotted). Sometimes the dash takes a bit to update, but I can see your backlog should be clear now. I'll check on the 'one worker consumes 4 different queues' bit, but that shouldn't cause any issues.
Victor
Victor2y ago
I actually had to "clear" the queue manually by making a temporary deploy that ack's everything without any processing - hence why the backlog dropped so fast. I hadn't noticed that this deploy actually allowed the queue to scale, though. Could my code be preventing the queue from scaling, then? In this particular queue, I'm throttling API calls to 1 request/second with Promise.allSettled([apiCallFn, timeoutFn]).
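(For reference, a minimal runnable sketch of the throttle pattern Victor describes - names like apiCallFn and the 1 s delay are illustrative, not his actual code. allSettled waits for both promises to settle, so each call takes at least one second:)

```javascript
// Promise that resolves after ms milliseconds - the "timeoutFn" side.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function throttledCall(apiCallFn) {
  // allSettled waits for BOTH promises, so even if the API call returns
  // instantly, the function takes >= 1 s, capping throughput at ~1 req/s.
  const [apiResult] = await Promise.allSettled([apiCallFn(), sleep(1000)]);
  return apiResult; // { status: "fulfilled", value } or { status: "rejected", reason }
}
```

Note that allSettled (unlike Promise.race) never rejects, so a failing API call still yields a result object rather than throwing.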
itsmatteomanf
itsmatteomanf2y ago
The concurrency is lowered if there is any kind of error in processing, so if messages fail or are retried, the concurrency will go back to 1.
Victor
Victor2y ago
My queues never throw, but I do call retry on some messages. So if I'm calling retry too often, that will prevent the queue from scaling? I don't think it scaled at all, though, which should've happened at some point (even temporarily) if I understood correctly?
itsmatteomanf
itsmatteomanf2y ago
From my understanding, retrying counts the same as a failure. If it still doesn't scale up, I'd just create a new queue - I had the same happen to me, and recreating it solved things. @kewbish can check it though, so don't delete the old one.
Victor
Victor2y ago
so reducing the number of .retry() calls did allow me to get some concurrency, as shown by the wobbliness of the backlog curve
Victor
Victor2y ago
problem: after some time, the whole backlog fails because of "Exceeded CPU Limit". I'm not really sure why that is the case
itsmatteomanf
itsmatteomanf2y ago
One thing you can do is re-add the same message to the queue instead of calling .retry()... it's not exactly the same thing, though.
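(A hedged sketch of that suggestion, with mock-friendly parameters - the real Cloudflare Queues consumer receives a (batch, env) pair in its queue() handler, and the queue/process names here are assumptions:)

```javascript
// Instead of msg.retry() - which the thread suggests clamps concurrency
// back to 1 - send a fresh copy of the body and ack the original.
async function handleMessage(msg, queue, process) {
  try {
    await process(msg.body);
    msg.ack(); // success: acknowledge as usual
  } catch (err) {
    // Re-enqueue a copy, then ack so the original never counts as a failure.
    // Caveat from the thread: "not exactly the same thing" - delivery-attempt
    // accounting and built-in retry limits are lost on the re-sent copy.
    await queue.send(msg.body);
    msg.ack();
  }
}
```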
Victor
Victor2y ago
I'll stick with retry - given that I'm billed for API calls, it feels safer ahah
itsmatteomanf
itsmatteomanf2y ago
You'd need to add a counter in the message, and send it to another queue once it's reached your threshold - don't go on forever lul
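(A sketch of that counter idea - the field name attempts, the threshold, and the dead-letter queue binding are all illustrative assumptions, not a real API:)

```javascript
const MAX_ATTEMPTS = 5; // illustrative threshold

// Each re-send carries an incremented counter; past the threshold the
// message is parked on a second ("dead letter") queue for inspection
// instead of looping forever.
async function requeueOrDeadLetter(body, mainQueue, deadLetterQueue) {
  const attempts = (body.attempts ?? 0) + 1;
  const next = { ...body, attempts };
  if (attempts > MAX_ATTEMPTS) {
    await deadLetterQueue.send(next); // give up: park it
  } else {
    await mainQueue.send(next); // try again with the bumped counter
  }
  return next;
}
```

Bumping the counter inside this one helper is the point: the send and the increment can't be forgotten separately.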
Victor
Victor2y ago
knowing me I'll forget to increase the counter at some point... 💀
itsmatteomanf
itsmatteomanf2y ago
You can still shard the queue across multiple ones if the speed isn't enough - with fetch() calls I have seen concurrency not be really linear.