Bottlenecks in a SignalR Implementation for Managing Room Data?
I am using SignalR to manage real-time room data in a small practice app that I may want to scale up in the future.
The current implementation involves an in-memory dictionary where each room is mapped to a HashSet<userId> to track users in the room. So far, this does not seem like a bottleneck.
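Roughly, the tracking looks like this (simplified sketch; hub, method, and event names are placeholders):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

public class RoomHub : Hub
{
    // roomId -> userIds currently in that room
    // (note: a plain Dictionary is not thread-safe if hub methods run concurrently)
    private static readonly Dictionary<string, HashSet<string>> Rooms = new();

    public async Task JoinRoom(string roomId, string userId)
    {
        if (!Rooms.TryGetValue(roomId, out var users))
        {
            users = new HashSet<string>();
            Rooms[roomId] = users;
        }
        users.Add(userId);

        await Clients.Caller.SendAsync("JoinedRoom", roomId);
    }
}
```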
However, I suspect the number of persistent connections (SignalR maintains a connection per user) might become a scalability issue as I scale up. I am running it as a Dockerized .NET server on a cloud CVM with 4 GB of RAM.
Are there other common bottlenecks or pitfalls I should be aware of when scaling SignalR for this purpose?
Any advice or best practices for optimizing SignalR would be appreciated.
5 Replies
I think memory and resource management will be the key concern, since each connection consumes extra resources. The same goes for the in-memory dictionary as you scale up. You could gain some headroom with efficient reconnect/disconnect logic, and depending on your use case it may be possible to use group-based broadcasting instead of messaging each user's connection individually.
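For example, something along these lines (just a sketch, assuming a hub called RoomHub; SignalR then tracks group membership per connection for you):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

public class RoomHub : Hub
{
    public async Task JoinRoom(string roomId)
    {
        // One SignalR group per room; membership is tied to the connection.
        await Groups.AddToGroupAsync(Context.ConnectionId, roomId);
        await Clients.Group(roomId).SendAsync("UserJoined", Context.ConnectionId);
    }

    public async Task SendToRoom(string roomId, string message)
    {
        // A single call fans the message out to every connection in the group.
        await Clients.Group(roomId).SendAsync("RoomMessage", message);
    }

    public override async Task OnDisconnectedAsync(Exception? exception)
    {
        // SignalR removes dead connections from groups automatically,
        // but any of your own in-memory room state still needs cleanup here.
        await base.OnDisconnectedAsync(exception);
    }
}
```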
Yeah, I read that SignalR struggles with scalability in large-scale systems unless you add a backplane (e.g., Redis or the Azure SignalR Service).
However, I am not sure at what point that becomes necessary, hence asking about potential pitfalls before they crop up.
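From what I read, wiring up the Redis backplane would look roughly like this (haven't tried it yet; the connection string and hub path are placeholders):

```csharp
// Requires the Microsoft.AspNetCore.SignalR.StackExchangeRedis package.
var builder = WebApplication.CreateBuilder(args);

builder.Services
    .AddSignalR()
    .AddStackExchangeRedis("localhost:6379"); // placeholder Redis connection string

var app = builder.Build();
app.MapHub<RoomHub>("/rooms"); // placeholder hub + path
app.Run();
```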
OK, so the main worry then is how many users I expect to have connected at the same time, each holding a persistent connection.
How quickly do problems arise? Like, clearly there is no issue with ~10 users, but when do bottlenecks/choking start to become a concern? 100 users? 1,000 users?
I changed it to ConcurrentDictionary<>
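So the room tracking now looks roughly like this (simplified sketch; the ConcurrentDictionary only protects the map itself, so I still lock each HashSet when touching it):

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;

public static class RoomTracker
{
    // roomId -> userIds; GetOrAdd makes the "create room if missing" step atomic.
    private static readonly ConcurrentDictionary<string, HashSet<string>> Rooms = new();

    public static void AddUser(string roomId, string userId)
    {
        var users = Rooms.GetOrAdd(roomId, _ => new HashSet<string>());
        lock (users)
        {
            users.Add(userId);
        }
    }

    public static void RemoveUser(string roomId, string userId)
    {
        if (Rooms.TryGetValue(roomId, out var users))
        {
            lock (users)
            {
                users.Remove(userId);
            }
        }
    }
}
```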