Cloudflare Developers
•Created by sup filistine on 2/6/2025 in #workers-help
Something broken with establishing websocket connection with Browser as a service provider
The most head-scratching thing about this is that the issue randomly stops happening after small code changes that are supposedly no-ops, which makes me think deployments reset some counter.
Cloudflare Developers
•Created by sup filistine on 2/6/2025 in #workers-help
Something broken with establishing websocket connection with Browser as a service provider
The issue doesn't happen when using Cloudflare's own Browser Rendering. That's weird because, on the face of it, that also uses WebSockets.
Cloudflare Developers
•Created by sup filistine on 2/6/2025 in #workers-help
Something broken with establishing websocket connection with Browser as a service provider
Oh, I just noticed that the issue resolves momentarily after a deployment but returns on subsequent requests, so perhaps I am running into some rate limit.
Cloudflare Developers
•Created by sup filistine on 2/6/2025 in #workers-help
Something broken with establishing websocket connection with Browser as a service provider
I know there is a simultaneous outbound connection limit of 6 in CF, though I have never run into it before.
The only other outbound connections we have apart from the wss one (Zenrows) are some HTTP connections for uploading our logs to BetterStack.
Cloudflare Developers
•Created by sup filistine on 1/19/2025 in #queues
Queue retries not happening consistently with either message or global retry_delay wrangler setting
Yep, it's working well now. Closing this thread.
Cloudflare Developers
•Created by sup filistine on 1/19/2025 in #queues
Queue retries not happening consistently with either message or global retry_delay wrangler setting
My bad as far as this report goes: I mistakenly put 30,000 as the delay seconds on an individual message, thinking I was working with milliseconds. I will adjust that and see where it leads.
Cloudflare Developers
•Created by sup filistine on 1/19/2025 in #queues
Queue retries not happening consistently with either message or global retry_delay wrangler setting
The dead letter queue is also not filling up. I'm not sure what's up with that, because when I did get lucky at one point and retries started happening on new messages, they didn't enter the DLQ after failed attempt #4.
Cloudflare Developers
•Created by sup filistine on 1/19/2025 in #queues
Queue retries not happening consistently with either message or global retry_delay wrangler setting
To be sure, when I call .retry() on a message, I don't call .ack(); I think I am understanding the usage correctly.
I thought retry() without any arguments would make use of retry_delay in wrangler.toml, but I am not 100% sure about that, given I read that it applies to uncaught exceptions. My consumers are exiting gracefully because of a Promise.allSettled.
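Roughly the shape of my consumer, as a sketch (processComplaint() is a stand-in for the real per-message work, not the actual implementation):
```ts
interface Env {}

// Hypothetical stand-in for the real per-message work.
declare function processComplaint(body: unknown, env: Env): Promise<void>;

export default {
  async queue(batch: MessageBatch<unknown>, env: Env): Promise<void> {
    // Promise.allSettled means the handler itself never throws, so Queues
    // never sees an uncaught exception from this consumer.
    const results = await Promise.allSettled(
      batch.messages.map((msg) => processComplaint(msg.body, env))
    );
    results.forEach((result, i) => {
      const msg = batch.messages[i];
      if (result.status === "fulfilled") {
        msg.ack();
      } else {
        // retry() only, never paired with ack(); explicit 30s delay per message.
        msg.retry({ delaySeconds: 30 });
      }
    });
  },
};
```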
Cloudflare Developers
•Created by sup filistine on 1/19/2025 in #queues
Queue retries not happening consistently with either message or global retry_delay wrangler setting
@Pranshu Maheshwari my account ID is c0ff9f2cf8ab2c3e11611df68fd4ac96 (taken from my zones page), and the queue name is complaints-publisher.
You had offered me some help with another queue issue last week, but that one became obsolete for me; now I am experiencing this one, though only in one of my envs, sadly the production env.
I launched the feature today, and it would be great to get to the bottom of why the retries are not happening.
To elaborate:
1. Retries never happen on the 30s interval that I set per message with .retry() before exiting the consumer. They happen haphazardly, oftentimes when newer messages arrive, and other times not at all.
2. With retry_delay set to 300, I have seen messages retry themselves at 600s; I'm not sure whether that was a coincidence, because exactly then new messages had arrived.
My consumer settings in wrangler are:
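```toml
# Placeholder consumer config of the shape under discussion; these are not the
# exact settings (they didn't survive this export). Only the queue name and
# retry_delay = 300 come from the messages above.
[[queues.consumers]]
queue = "complaints-publisher"
max_batch_size = 10                   # placeholder
max_retries = 3                       # placeholder
retry_delay = 300                     # seconds, as described above
dead_letter_queue = "complaints-dlq"  # placeholder name
```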
Cloudflare Developers
•Created by sup filistine on 1/5/2025 in #queues
maybe a short question on queues: x-
Thanks for getting in touch! For now, I realized I am okay with max_concurrency=1, using a package like p-limit (https://github.com/sindresorhus/p-limit) for concurrency inside the consumer. I will be back if I ever need more than a concurrency of 1 from the queues themselves.
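Roughly the shape that takes, as a sketch (the cap of 5 and doWork() are placeholders):
```ts
import pLimit from "p-limit";

// Hypothetical stand-in for the real per-message work.
declare function doWork(body: unknown): Promise<void>;

// The queue consumer stays at max_concurrency=1; p-limit caps parallelism
// inside the single consumer invocation instead.
const limit = pLimit(5); // placeholder cap

export default {
  async queue(batch: MessageBatch<unknown>): Promise<void> {
    await Promise.allSettled(
      batch.messages.map((msg) => limit(() => doWork(msg.body)))
    );
  },
};
```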
Cloudflare Developers
•Created by sup filistine on 1/5/2025 in #queues
maybe a short question on queues: x-
I know you have said above that dropouts shouldn't be a thing, but I continue to see them, and I have to launch this product on this platform in the coming month. It would be good to know about its characteristics so that I can fine-tune the configuration for minimum dropouts.
Cloudflare Developers
•Created by sup filistine on 1/5/2025 in #queues
maybe a short question on queues: x-
@Pranshu Maheshwari Is having max_retries=0 related to dropouts? I definitely see dropouts when that is set on my queue: not even a first delivery attempt is made on a message when the handler that got the first batch is busy.
Cloudflare Developers
•Created by sup filistine on 1/5/2025 in #queues
maybe a short question on queues: x-
I am using a remote log sink in BetterStack, but it relies on ctx.waitUntil to ensure logs aren't missed.
Also, in the Cloudflare dashboard for the consumer worker, I guess I could look for queue events and count them, something I forgot to check last time. All I know for a fact is that there were no logs, and my consumer is pretty (log-)chatty in terms of success or failure in processing. I currently catch and suppress the exception, opting to ack in both cases.
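The log-flush pattern looks roughly like this (sketch; LOG_SINK_URL is a placeholder for the BetterStack ingest endpoint, not a real binding name):
```ts
interface Env {
  LOG_SINK_URL: string; // placeholder binding for the log sink endpoint
}

export default {
  async queue(batch: MessageBatch<unknown>, env: Env, ctx: ExecutionContext): Promise<void> {
    // waitUntil keeps the invocation alive until the log upload settles,
    // so the sink request isn't cut off when the handler returns.
    ctx.waitUntil(
      fetch(env.LOG_SINK_URL, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ event: "queue handler invoked", batchSize: batch.messages.length }),
      })
    );
    for (const msg of batch.messages) msg.ack(); // ack in both cases, as described
  },
};
```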
Cloudflare Developers
•Created by sup filistine on 1/5/2025 in #queues
maybe a short question on queues: x-
Hello, thanks for your response. The way I am detecting dropped messages is by putting a log line as the very first line of my consumer's queue handler. In the very first dropout scenario I described, I never see that log appear for 7 out of 10 messages. Also, if I go to the dashboard for the queue, it doesn't show failures at that point in time.
So message 1 gets delivered.
60s elapse.
Messages 2 and 3 get through, though in my experience they aren't necessarily the chronological 2 and 3: there is a UUID on each message, and the 3 out of 10 that get through are not contiguous or in the order the producer saw them.
I would be happy to do a virtual session and show a live demo of the issue, but given all I had to do was use a setTimeout queue handler implementation, I think you could repro the issue quite easily.
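Something like this sketch (the 60s stands in for the slow processing; any long wait reproduces it for me):
```ts
// Minimal repro sketch: every message just waits before acking,
// simulating a slow consumer.
export default {
  async queue(batch: MessageBatch<unknown>): Promise<void> {
    for (const msg of batch.messages) {
      await new Promise((resolve) => setTimeout(resolve, 60_000)); // simulate slow work
      msg.ack();
    }
  },
};
```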
Cloudflare Developers
•Created by sup filistine on 1/5/2025 in #queues
maybe a short question on queues: x-
Ref: https://grafana.com/docs/k6/latest/using-k6/scenarios/executors/shared-iterations/
With a shared_iterations k6 executor and setTimeout changed to 10s:
I saw 20 messages queue up quickly, and not everything had to wait 10s before consumer logs showed up (I am using BetterStack for logging, which is slightly more immediate than the Cloudflare dashboard). Concurrency remains around 0.8, and the backlog switches between 8 and 10.
Successive runs seem to be even better. Repeating a few more times, concurrency is 1.1 and the average backlog was 13. I am not sure if this means the system is adaptive and learning, but how is it supposed to learn a 60-100s delay in consumer processing when there are message failures?
Shouldn't it hold on to the messages if it's training on the incoming dataset?
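For reference, the k6 setup is roughly this (sketch per the shared-iterations docs linked above; the enqueue URL is a placeholder for my producer endpoint):
```ts
import http from "k6/http";

export const options = {
  scenarios: {
    burst: {
      executor: "shared-iterations",
      vus: 10,        // 10 VUs share...
      iterations: 20, // ...20 total requests
      maxDuration: "60s",
    },
  },
};

export default function () {
  // Placeholder producer endpoint that enqueues one message per request.
  http.post("https://example.com/enqueue");
}
```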
Cloudflare Developers
•Created by sup filistine on 1/5/2025 in #queues
maybe a short question on queues: x-
If I change the setTimeout to only wait 3 seconds, this kind of message dropping doesn't happen. I can even ramp the k6 test up to 100 messages, and the system keeps up. The average backlog is only 2 or 3. Concurrency slowly rises from 0.5 to 0.6; I am sure the lower starting value is due to the historically bad concurrency before I reduced the timeout. The time window on the dashboard is 30m, of which only the last 10m or so were run with the smaller timeout.
The truth is that the production use case I am trying to meet will take the consumer 60-100s, so how do I go about achieving at those values what 3s gives me?
The request rate won't be very high, and I intend to put exponential backoff into the system on a per-message basis depending on the current request rate. But as a baseline, it would be nice to know why the consumers aren't scaling up for an occasional 10 messages.
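The backoff I have in mind looks roughly like this (sketch; the 30s base and the 15-minute cap are placeholders):
```ts
// Per-message exponential backoff: the delay doubles with each delivery
// attempt. msg.attempts is the delivery-attempt count Queues tracks per message.
function backoffSeconds(attempts: number): number {
  const base = 30; // placeholder: first retry after 30s
  return Math.min(base * 2 ** (attempts - 1), 900); // cap at 15 minutes
}

export default {
  async queue(batch: MessageBatch<unknown>): Promise<void> {
    for (const msg of batch.messages) {
      try {
        // ... per-message work ...
        msg.ack();
      } catch {
        msg.retry({ delaySeconds: backoffSeconds(msg.attempts) });
      }
    }
  },
};
```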