11 Replies
Yes I'm using the hibernation API I believe.
I do a lookup against the state of websockets based on tag when replying, but I don't get that error so it seems the websocket is always there.
> Browsers periodically send protocol level pings so I'd be surprised if this was the reason for premature disconnection.
That was my understanding too, though I've read mixed accounts of what people have experienced online. It'd be cheap to test an application level ping I guess. I'm going to try another browser next.
Ok, I've reproduced it in vanilla Chrome Version 127.0.6533.100. A more nuanced workload description is that my browser client sends about 2900 messages at once, receives immediate responses from the websocket server, then sits idle receiving further messages from the server with more data. First message from client -> server at 10:45am PDT, last message from server -> client at 10:49am PDT, before the socket is closed and no more messages flow in either direction.
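For context on how the close shows up on the browser side, here's a minimal client sketch; the endpoint URL, message shape, and counts are placeholders, not the actual client code from this thread.

```ts
// Minimal browser-side sketch (placeholder URL and payloads, not the real client):
// send a burst of messages, then log exactly how the socket eventually closes.
const ws = new WebSocket("wss://example.com/websocket"); // placeholder endpoint

ws.addEventListener("open", () => {
  // Burst of ~2900 requests up front, then sit idle waiting for server pushes.
  for (let i = 0; i < 2900; i++) {
    ws.send(JSON.stringify({ id: i }));
  }
});

ws.addEventListener("message", (event) => {
  console.log("server ->", event.data);
});

ws.addEventListener("close", (event) => {
  // 1006 is "abnormal closure": the connection dropped without a Close frame.
  console.log("closed", { code: event.code, reason: event.reason, wasClean: event.wasClean });
});
```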
Huh, and you're saying the client receives 1006?
I'm on chrome connected to a DO I have, which I think is just the hibernation example we provide in our docs.
I sent a websocket message at 13:19, then waited until right now (13:54), sent another message and got a reply from the DO. My close handler never ran
Can you wrangler tail the DO? Do you see any errors in your dashboard?
> Huh, and you're saying the client receives 1006?
That's right.
> I'm on chrome connected to a DO I have, which I think is just the hibernation example we provide in our docs.
Maybe I can try to deploy that, then iteratively adjust to add the rest of the pieces in my current workflow to debug further. The websocket server I have receives a message, checks if it has a specific field, if not it queues it for further work, if so it returns it to the browser client. The queue consumer pops off messages, does work, then sends the result as a message back to the websocket server.
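To make that flow concrete, here's a minimal sketch, assuming a Durable Object using the WebSocket Hibernation API with a Queues producer binding. The binding name REQUEST_QUEUE, the done field, and the tag handling are placeholder assumptions for illustration, not the actual code from this thread.

```ts
// Minimal sketch (placeholders, not the thread's actual worker): a Durable Object
// that accepts a WebSocket with the Hibernation API, tags it, and either replies
// directly or enqueues the message for a queue consumer to process.
import { DurableObject } from "cloudflare:workers";

interface Env {
  REQUEST_QUEUE: Queue; // assumed name for the Queues producer binding
}

export class WebSocketServer extends DurableObject<Env> {
  async fetch(request: Request): Promise<Response> {
    const { 0: client, 1: server } = new WebSocketPair();

    // Tag the socket so it can be looked up later when replying.
    const tag = new URL(request.url).searchParams.get("tag") ?? "default";
    this.ctx.acceptWebSocket(server, [tag]);

    return new Response(null, { status: 101, webSocket: client });
  }

  // Hibernation handler: invoked even if the object was evicted in the meantime.
  async webSocketMessage(ws: WebSocket, message: string | ArrayBuffer) {
    const data = JSON.parse(message as string);

    if (data.done) {
      // Already has the field in question ("done" is a placeholder name):
      // return it straight to the browser client.
      ws.send(JSON.stringify(data));
    } else {
      // Otherwise queue it for further work, carrying the DO id and tag so the
      // result can be routed back to the right socket later.
      const tag = this.ctx.getTags(ws)[0];
      await this.env.REQUEST_QUEUE.send({ doId: this.ctx.id.toString(), tag, payload: data });
    }
  }

  // Close handler on the DO side; logging code/wasClean here helps correlate
  // with the 1006 the browser eventually sees.
  async webSocketClose(ws: WebSocket, code: number, reason: string, wasClean: boolean) {
    console.log("webSocketClose", { code, reason, wasClean });
  }
}
```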
> Can you wrangler tail the DO? Do you see any errors in your dashboard?
I can, but I notice logs stop after some time, and I see more continuous logging coming from the log tab in the web UI, so I haven't been trusting it much.
> if not it queues it for further work,
With an alarm()? Or something else? I wonder if the DO itself is just crashing for some reason. You can DM me your account ID and I can try to take a look (but I'll be a bit busy with other stuff today).
npx wrangler tail gives me:
Unknown Event - Ok @ 8/10/2024, 11:10:03 AM
Then I get this warning:
Tail is currently in sampling mode due to the high volume of messages. To prevent messages from being dropped consider adding filters.
Then the last message timestamp is:
Unknown Event - Ok @ 8/10/2024, 11:10:04 AM
There are more responses coming through the websocket still, so the logs don't seem to keep up in tail.
I just use request_queue_binding.send to queue the message, then the consumer does work, looks up the websocket in the state (the consumer is the same DO object, I guess), and sends the resulting message. It works fine until the 1006 error.
🤦♂️ Sorry, I forgot there's an issue with logs coming from our websocket hibernation methods. The runtime is emitting logs correctly, but the service that interprets them isn't consuming them correctly as of late. We (the Durable Objects team) don't contribute to that service, but we've let the code owners know and it should be fixed soon.
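As a side note on the queue round trip described a couple of messages up: in Workers, a Queues consumer is a queue() handler on the Worker (it can live in the same script as the DO), so one way the result could get back out over the tagged socket looks roughly like the sketch below. The doId/tag fields, the WEBSOCKET_SERVER binding, and the deliverResult RPC method are assumptions for illustration, not the thread's actual code.

```ts
// Minimal sketch (assumptions, not the thread's actual code): the Queues consumer
// does the work, then hands the result back to the Durable Object, which looks up
// the hibernated WebSocket by tag and sends the result to the browser.
import { DurableObject } from "cloudflare:workers";

interface QueuedRequest {
  doId: string;     // assumed: the queued message carries the DO id...
  tag: string;      // ...and the tag the socket was accepted with
  payload: unknown;
}

interface Env {
  REQUEST_QUEUE: Queue<QueuedRequest>;
  WEBSOCKET_SERVER: DurableObjectNamespace<WebSocketServer>;
}

// Continuing the WebSocketServer sketch from above, showing only result delivery.
export class WebSocketServer extends DurableObject<Env> {
  async deliverResult(tag: string, result: unknown) {
    // Look up the hibernated socket(s) by tag and push the result to the browser.
    for (const ws of this.ctx.getWebSockets(tag)) {
      ws.send(JSON.stringify({ done: true, result }));
    }
  }
}

export default {
  // Queues consumer handler; runs in the Worker, not inside the DO itself.
  async queue(batch: MessageBatch<QueuedRequest>, env: Env): Promise<void> {
    for (const msg of batch.messages) {
      const { doId, tag, payload } = msg.body;

      const result = await doWork(payload); // placeholder for the actual work

      // Hand the result back to the DO instance that owns the socket.
      const stub = env.WEBSOCKET_SERVER.get(env.WEBSOCKET_SERVER.idFromString(doId));
      await stub.deliverResult(tag, result);

      msg.ack();
    }
  },
};

async function doWork(payload: unknown): Promise<unknown> {
  return payload; // stand-in for the real processing
}
```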
Ok cool that eliminates me being the issue haha
Is my account ID just my email?
Nope, there's some actual hex ID, let me see if I can figure out how to get it
wrangler whoami
I think that should work?
Ah I have an account_id in my wrangler.toml as well
(Don't paste here tho!!! this is still a public channel 😛 )
Send me a DM instead
Yeah will DM you, the whoami matches the account_id in the wrangler.toml