I didn’t see this graph before. But yeah, you can see that for the whole 2 hours I was charged 23 GB-sec each minute (other than a few blips), even though the requests graph shows no requests at all during that period (other than the initial connection requests). This was one of the durable objects I was testing during that period. I had another with the exact same usage graph.
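For context, a rough worked figure (assuming the usual Durable Objects duration billing of 128 MB of memory while an object is active): 0.125 GB × 60 s = 7.5 GB-sec per minute for one object that never goes inactive, so a steady 23 GB-sec per minute corresponds to roughly three objects' worth of always-on duration.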
Jacob Marshall (OP) · 4mo ago
Actually the other durable object showed slightly different usage. Not too sure why it’s billed cheaper.
Jacob Marshall (OP) · 4mo ago
Hmmm. This seems wrong haha. This is the “Request Wall Time” graph for the second durable object. The first durable object shows a normal number.
Jacob Marshall (OP) · 4mo ago
Ok so the first graph shows 3x more GB-sec usage, and that probably lines up with the extra websocket connections I had to the first durable object. I'm probably gonna need to open a support ticket because this doesn't make much sense. But I also know how... not helpful support is 😦
Vero · 4mo ago
Hey devs, we've added one more video to our series on Real-time Apps with Durable Objects 🚀 Part 6: https://youtu.be/ojhe-scYVe0
[YouTube, Cloudflare Developers: "Making and Answering WebRTC Calls | Build a Video Call App Part 6" adds making and answering WebRTC video calls to the frontend built earlier in the series: creating peer-to-peer connections, handling ICE candidates, and sending and receiving video streams between users.]
xeon06 · 4mo ago
I'm curious about connection limits on a single hibernated Durable Object... My use case is that I want to be able to push a small message from my server whenever there is a config change to my website. In theory it's all the same "channel" for everyone, so a single durable object makes sense, but of course I could have thousands of users connected to my website at once.

I found this page, which suggests a sharding mechanism: https://community.cloudflare.com/t/durable-object-max-websocket-connections/303138 I'm curious if anyone has done any research / prior work on this topic?

I already have a DO per user for a different purpose, so another idea is to leverage those to push that message; however, I would have to find a way to hit all the stubs with active users. Perhaps registering the stub IDs in KV when someone connects and deregistering them after the last disconnect would work..?
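A minimal sketch of that register/deregister idea, assuming the WebSocket Hibernation API and a hypothetical KV binding named ACTIVE_USERS; the key scheme and the use of the webSocketClose hook are illustrative, not an established pattern:

```ts
// Hypothetical per-user DO that tracks whether it currently has live sockets.
interface Env {
  ACTIVE_USERS: KVNamespace; // assumed KV binding for the "active stubs" registry
}

export class UserDO {
  constructor(private state: DurableObjectState, private env: Env) {}

  async fetch(_req: Request): Promise<Response> {
    const pair = new WebSocketPair();
    this.state.acceptWebSocket(pair[1]); // hibernation-friendly accept

    if (this.state.getWebSockets().length === 1) {
      // First live socket for this user: register this DO so a later
      // config-change broadcast can list ACTIVE_USERS and hit each stub.
      await this.env.ACTIVE_USERS.put(this.state.id.toString(), "1");
    }
    return new Response(null, { status: 101, webSocket: pair[0] });
  }

  async webSocketClose(ws: WebSocket) {
    const remaining = this.state.getWebSockets().filter((s) => s !== ws);
    if (remaining.length === 0) {
      // Last socket gone: deregister this DO.
      await this.env.ACTIVE_USERS.delete(this.state.id.toString());
    }
  }
}
```

One caveat: KV is eventually consistent, so a broadcast driven by listing those keys could briefly miss a user who only just connected.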
Hard@Work · 4mo ago
How big are the payloads and how often are they sent? A single DO should be able to handle a few thousand WebSockets pretty easily, assuming you don't hammer it with changes
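For the single-DO baseline being described here, a minimal sketch assuming the WebSocket Hibernation API (the class and route names are illustrative): all clients attach to one object, and a rare config change hits a /broadcast endpoint.

```ts
// Hypothetical single "channel" DO: every client connects here, and the
// server POSTs rare config changes to /broadcast.
export class ChannelDO {
  constructor(private state: DurableObjectState) {}

  async fetch(req: Request): Promise<Response> {
    const url = new URL(req.url);

    if (url.pathname === "/connect") {
      const pair = new WebSocketPair();
      // acceptWebSocket lets the DO hibernate between messages, so thousands
      // of mostly idle sockets don't keep it (and duration billing) active.
      this.state.acceptWebSocket(pair[1]);
      return new Response(null, { status: 101, webSocket: pair[0] });
    }

    if (url.pathname === "/broadcast" && req.method === "POST") {
      const msg = await req.text();
      for (const ws of this.state.getWebSockets()) ws.send(msg);
      return new Response("ok");
    }

    return new Response("not found", { status: 404 });
  }
}
```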
xeon06 · 4mo ago
Just a small JSON object with a couple of properties, sent very, very rarely. Thanks for the feedback!
Milan · 4mo ago
First thing I'd try (if you need to scale up to a lot of connections) is:
- have dedicated DOs that clients are connected to via WS; we'll call each one a Node
- have your server send a message to another DO (called Root), which tells all the Node DOs with connections to broadcast the new data to clients

When a new client wants to connect via WS, your worker can hit an endpoint on each Node DO like /numConnections to figure out how many connections that node currently has. If it has, for example, 1000 connections, then you could iterate to the next DO and check how many connections it has. If all your existing DOs are full, go to a new one with idFromName('node_n+1'), and store in Root that you've got n+1 DOs. Then, when your server has an update, it sends a request w/ the JSON to your Root DO, and it sends it to each Node in the set {1, ..., n+1}.

In doing this, your Root only receives messages when:
- the server sends an update
- you need to create a new Node because the others are full

Your Node only receives a message when:
- a new client wants to connect
- there's an update from the Root

You would need to consider a way to decrease the number of active Nodes over time, and maybe a sequential numbering scheme for Nodes wouldn't work great (e.g. if Node_3 goes away because all your connections die, then you have a gap between 2 and 4). Maybe hashing to distribute requests would be better ¯\_(ツ)_/¯ There are probably also other hacks for decreasing duration / request amplification 😉
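A sketch of the Node/Root split described above, under the same assumptions as the single-DO sketch earlier; each Node would look like that ChannelDO plus a /numConnections endpoint, so this only shows the Worker-side capacity walk and the Root fan-out. The binding name (NODE), the 1000-connection cap, and the node_n naming scheme are illustrative.

```ts
interface Env {
  NODE: DurableObjectNamespace; // assumed binding for the Node DO class
}

// Worker-side: walk node_1..node_n until one has spare capacity; otherwise
// fall through to node_n+1 (the Root would then record that n has grown).
async function pickNode(env: Env, n: number): Promise<DurableObjectStub> {
  for (let i = 1; i <= n; i++) {
    const stub = env.NODE.get(env.NODE.idFromName(`node_${i}`));
    const res = await stub.fetch("https://node/numConnections");
    if (Number(await res.text()) < 1000) return stub; // illustrative cap
  }
  return env.NODE.get(env.NODE.idFromName(`node_${n + 1}`));
}

// Root DO: receives the server's update once and fans it out to every Node.
export class RootDO {
  constructor(private state: DurableObjectState, private env: Env) {}

  async fetch(req: Request): Promise<Response> {
    if (new URL(req.url).pathname === "/update" && req.method === "POST") {
      const body = await req.text();
      const n = (await this.state.storage.get<number>("numNodes")) ?? 1;
      await Promise.all(
        Array.from({ length: n }, (_, i) =>
          this.env.NODE
            .get(this.env.NODE.idFromName(`node_${i + 1}`))
            .fetch("https://node/broadcast", { method: "POST", body })
        )
      );
      return new Response("ok");
    }
    return new Response("not found", { status: 404 });
  }
}
```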
Hard@Work · 4mo ago
I would just be worried a bit about overcomplication and (very slightly) elevated costs. Assuming that messages are rare and small, and clients are in the thousands (high or low), a single DO should be more than enough to serve that traffic, no?
Milan · 4mo ago
Yup, a single DO should be totally fine w/ a couple thousand connections, but it's Sunday and we're supposed to over-engineer things on Sundays 😛
Hard@Work · 4mo ago
Oh... I thought it was shipping to prod on Sundays right before a holiday Monday?
Milan · 4mo ago
I suspect some folks have scaled to a very large number of WS that eventually get updates from a single root, but the only way to verify is to hear from devs who've done it. Maybe partykit has?
Hard@Work · 4mo ago
partysub probably
[npm: partysub, version 0.0.8. "This project is experimental and is not yet recommended for production use."]
Hard@Work · 4mo ago
Oh wait, that one is distributed, I think
xeon06 · 4mo ago
Another naive way might be to use something like a geohash to sort people into "bins" geographically, but then you still need the DOs to self-register as having clients somewhere
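A tiny sketch of that geohash-binning idea, purely as an assumption of how it could look; the ngeohash package, the 3-character precision (cells on the order of 150 km), and the bin_ naming are illustrative choices:

```ts
import geohash from "ngeohash";

interface Env {
  NODE: DurableObjectNamespace; // assumed binding for the per-bin DO
}

// Clients in the same geographic cell land on the same DO; each bin DO would
// still need to self-register somewhere (e.g. KV) so broadcasts can find it.
function binFor(env: Env, lat: number, lon: number): DurableObjectStub {
  const bin = geohash.encode(lat, lon, 3); // 3 chars ≈ 150 km-scale cells
  return env.NODE.get(env.NODE.idFromName(`bin_${bin}`));
}
```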