C# · 3mo ago
TYoemtais.z

Redis or database for cache, distributed lock, SignalR scale-out?

I am evaluating whether to use Redis or alternative methods for caching, distributed locking, and scaling SignalR in our new system. Despite my tech lead's skepticism towards Redis, I am exploring various perspectives to inform our architectural choices. Here are the details and considerations for each component (rough sketches of the Redis-based options follow after the list):

1. Caching:
Requirements: Highly up-to-date data is essential.
Data characteristics: Most keys hold a few hundred KB, with rare instances up to 2-3 MB.
Structure: Multi-tenant databases organized as [company_name] and [company_name][storeX], with several hundred keys per store and expected growth to several hundred stores within a year.
Approaches:
- Redis: Keys in the format [company][store][group/groups]_specific_name. For GET requests, check whether the key exists; if it does, return the data from Redis; if not, retrieve and parse the data from the database, then store it in Redis. For POST/PUT/DELETE requests, process and save the data, then remove the related keys from Redis for the affected company and/or stores. E.g. a change in store settings removes all keys affected by those settings, i.e. keys in the "Settings" group.
- Database: Use a dedicated cache table within each [company_name] database, with an analogous key format and invalidation logic. Using triggers to clear the cache is out, because the company's main database and the store databases can affect each other's request results.

2. Distributed locking:
- Redis: Implement using a library such as RedLock.net.
- Database: Use a lock table with columns like [store][locked]. Check for an existing lock before proceeding; if unlocked, set a GUID, verify it, process the data, and then clear the lock. If locked, retry after a delay.

3. Scaling SignalR (websocket notifications):
- Redis: Use Redis as a backplane, following the approach recommended by Microsoft.
- Alternatives:
  - Keep a single instance of the SignalR application.
  - Replace websockets with periodic polling every few seconds.
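For the Redis caching option, a minimal cache-aside sketch with StackExchange.Redis could look like the following. The key layout, `StoreSettings`, `IStoreSettingsRepository`, and the 30-minute TTL are illustrative assumptions, not anything from our codebase; the per-group Redis set is just one way of tracking which keys belong to the "Settings" group so they can all be dropped on a write.

```cs
// Cache-aside sketch with StackExchange.Redis.
// StoreSettings, IStoreSettingsRepository, the key layout and the TTL are
// illustrative assumptions; a per-group Redis set tracks keys for invalidation.
using System;
using System.Text.Json;
using System.Threading.Tasks;
using StackExchange.Redis;

public record StoreSettings(string Currency, bool AllowBackorders);

public interface IStoreSettingsRepository
{
    Task<StoreSettings> LoadSettingsAsync(string company, string store);
    Task SaveSettingsAsync(string company, string store, StoreSettings settings);
}

public class StoreSettingsCache
{
    private static readonly TimeSpan Ttl = TimeSpan.FromMinutes(30);
    private readonly IDatabase _redis;
    private readonly IStoreSettingsRepository _repository;

    public StoreSettingsCache(IConnectionMultiplexer mux, IStoreSettingsRepository repository)
    {
        _redis = mux.GetDatabase();
        _repository = repository;
    }

    // GET path: return from Redis if present, otherwise load from the database and cache it.
    public async Task<StoreSettings> GetAsync(string company, string store)
    {
        var key = $"{company}:{store}:Settings_store_settings";
        var groupKey = $"{company}:{store}:group:Settings";

        var cached = await _redis.StringGetAsync(key);
        if (cached.HasValue)
            return JsonSerializer.Deserialize<StoreSettings>(cached.ToString())!;

        var settings = await _repository.LoadSettingsAsync(company, store);
        await _redis.StringSetAsync(key, JsonSerializer.Serialize(settings), Ttl);
        await _redis.SetAddAsync(groupKey, key); // remember which group this key belongs to
        return settings;
    }

    // POST/PUT/DELETE path: persist first, then drop every key registered under the "Settings" group.
    public async Task UpdateAsync(string company, string store, StoreSettings settings)
    {
        await _repository.SaveSettingsAsync(company, store, settings);

        var groupKey = $"{company}:{store}:group:Settings";
        foreach (var member in await _redis.SetMembersAsync(groupKey))
            await _redis.KeyDeleteAsync(member.ToString());
        await _redis.KeyDeleteAsync(groupKey);
    }
}
```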
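For the RedLock.net variant of the distributed lock, the usage pattern is roughly as below; the resource name, expiry, and the guarded work are placeholders, and in a real app the factory would be a long-lived singleton rather than created per call.

```cs
// Distributed lock sketch with RedLock.net over StackExchange.Redis.
// Resource name, expiry and the protected work are placeholders.
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using RedLockNet.SERedis;
using RedLockNet.SERedis.Configuration;
using StackExchange.Redis;

public static class StoreLockExample
{
    public static async Task ProcessStoreAsync(ConnectionMultiplexer mux, string company, string store)
    {
        // In production the factory should be created once and reused (e.g. registered as a singleton).
        var multiplexers = new List<RedLockMultiplexer> { new RedLockMultiplexer(mux) };
        using var factory = RedLockFactory.Create(multiplexers);

        var resource = $"lock:{company}:{store}";
        var expiry = TimeSpan.FromSeconds(30);

        // CreateLockAsync always returns a lock object; IsAcquired tells whether we actually hold it.
        using var redLock = await factory.CreateLockAsync(resource, expiry);
        if (redLock.IsAcquired)
        {
            // ... process the data for this store while holding the lock ...
        }
        else
        {
            // Another pod holds the lock; retry after a delay or report a conflict.
        }
    }
}
```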
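And for the SignalR scale-out, the Redis backplane recommended by Microsoft is essentially a one-line registration via the Microsoft.AspNetCore.SignalR.StackExchangeRedis package (with a recent StackExchange.Redis); the connection string, channel prefix, and `NotificationsHub` below are placeholders.

```cs
// Program.cs sketch: SignalR with a Redis backplane so any pod can push
// notifications to websocket clients connected to other pods.
// Connection string, channel prefix and NotificationsHub are placeholders.
using Microsoft.AspNetCore.SignalR;
using StackExchange.Redis;

var builder = WebApplication.CreateBuilder(args);

builder.Services
    .AddSignalR()
    .AddStackExchangeRedis("redis:6379", options =>
    {
        // Keeps the backplane pub/sub channels separate from cache keys in the same Redis.
        options.Configuration.ChannelPrefix = RedisChannel.Literal("signalr");
    });

var app = builder.Build();
app.MapHub<NotificationsHub>("/hubs/notifications");
app.Run();

public class NotificationsHub : Hub { }
```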
9 Replies
mtreit · 3mo ago
Use Redis.
Unknown User · 3mo ago
Message Not Public
TYoemtais.z · 3mo ago
Thank you very much for such a detailed answer. I would really like to go with Redis; database caching seems to create more issues than it solves. And yes, short polling is such a bad idea, but my tech lead is very skeptical of SignalR and Redis. Thankfully, it's my decision for this project.
Unknown User · 3mo ago
Message Not Public
TYoemtais.z · 3mo ago
The argument is mainly that it adds complexity to the project and is overkill, and that Redis is used for other things so there's no need to use it here. I think it's mainly because throughout his career he has only used a regular in-memory cache rather than a distributed one, and only short polling. And what do you mean by that?
Unknown User · 3mo ago
Message Not Public
TYoemtais.z · 3mo ago
I asked, and complexity and overkill are the reasons.
Unknown User · 3mo ago
Message Not Public
TYoemtais.z · 3mo ago
I think when we have more than 30 pods of the front API and over 80 pods of other services, a distributed cache is a must. We will also have more than 1000 business customers and a few million end users. We are creating the new architecture/infrastructure from scratch, so a cache table would hold millions of rows of valid keys and searching it in the database would take time. Dozens of database connections just for caching isn't optimal either, IMO. My DevOps and I are still deciding whether to run a Redis cluster or stay with a single instance, but that's a separate question.