Redis or database for cache, distributed lock, SignalR scale-out?
I am evaluating whether to use Redis or alternative methods for caching, distributed locking, and scaling SignalR applications in our new system. Despite my tech lead's skepticism towards Redis, I am exploring various perspectives to inform our architectural choices. Here are the details and considerations for each component:
1. Caching:
Requirements: Highly up-to-date data is essential.
Data Characteristics: Most values are a few hundred KB, with rare instances up to 2-3 MB.
Structure: Multi-tenant database categorized by [companyname] and [company_name][storeX], with several hundred keys per store and expected growth to several hundred stores within a year.
-Approaches:
--Redis:
Key, for example, in the format [company][store][group/groups]_specific_name
For GET requests, check if the key exists; if yes, return the data from Redis; if no, retrieve and parse data from the database, then store it in Redis.
For POST/PUT/DELETE requests, process the data, save it, and subsequently remove related keys from Redis for the affected company and/or stores.
E.g., a change in store settings removes all keys affected by those settings, i.e., keys that had a "Settings" group (this flow is sketched at the end of this section).
--Database:
Use a dedicated cache table within each [company_name] database. Key format and actions would be analogous.
Using triggers to clear the cache is out, because the company's main database and the store databases can affect each other's request results.
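For illustration, a minimal sketch of the cache-aside flow described above, using StackExchange.Redis; the exact key layout and GetFromDatabaseAsync are illustrative assumptions, not fixed decisions:

using System.Linq;
using StackExchange.Redis;

// Hypothetical sketch; key format and GetFromDatabaseAsync are placeholders.
public async Task<string> GetOrLoadAsync(IDatabase redis, string company, string store, string name)
{
    var key = $"{company}:{store}:Settings_{name}";
    var cached = await redis.StringGetAsync(key);
    if (cached.HasValue)
        return cached.ToString();                  // cache hit: serve from Redis

    var data = await GetFromDatabaseAsync(company, store, name); // hypothetical DB + parse step
    await redis.StringSetAsync(key, data);         // a TTL could be passed here as a safety net
    return data;
}

// Invalidation on POST/PUT/DELETE: remove the affected keys via a SCAN pattern.
public async Task InvalidateSettingsAsync(IConnectionMultiplexer mux, string company, string store)
{
    var server = mux.GetServer(mux.GetEndPoints().First());
    await foreach (var key in server.KeysAsync(pattern: $"{company}:{store}:Settings_*"))
        await mux.GetDatabase().KeyDeleteAsync(key);
}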
2. Distributed Locking:
-Redis: Implement using a library such as RedLock.net (sketched at the end of this section).
-Database: Utilize a lock table with columns like [store][locked]. Check for existing locks before proceeding; if unlocked, set a GUID, verify it, process the data, and then clear the lock. If locked, retry after a delay.
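A minimal sketch of the Redis option via RedLock.net; the endpoint and resource name are placeholders:

using RedLockNet.SERedis;
using RedLockNet.SERedis.Configuration;
using StackExchange.Redis;

// Hedged sketch, inside some async method; endpoints/names are placeholders.
var multiplexers = new List<RedLockMultiplexer>
{
    ConnectionMultiplexer.Connect("redis:6379")
};
using var factory = RedLockFactory.Create(multiplexers);

// Wait up to 10 s, retrying every second, for a 30 s lock on the store.
using var storeLock = await factory.CreateLockAsync(
    "store:42", TimeSpan.FromSeconds(30), TimeSpan.FromSeconds(10), TimeSpan.FromSeconds(1));

if (storeLock.IsAcquired)
{
    // process the data; the lock is released when disposed
}
else
{
    // lock held elsewhere: retry after a delay, just like the lock-table variant
}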
3. Scaling SignalR (Websocket Notifications):
-Redis: Employ Redis as a backplane, using the method recommended by Microsoft (sketched after the alternatives below).
-Alternatives:
--Maintain a single instance of the SignalR application.
--Replace websockets with periodic polling every few seconds.
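For reference, the Microsoft-recommended backplane boils down to one package (Microsoft.AspNetCore.SignalR.StackExchangeRedis) and one call; the connection string is a placeholder:

// Each instance publishes/subscribes hub messages through Redis.
builder.Services.AddSignalR()
    .AddStackExchangeRedis("redis:6379");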
40 replies
Clean architecture and EF entities as domain models
Hi!
I have an architecture inspired mostly by Ardalis's Clean Architecture. It comprises 3 (theoretically 4) layers:
1. Controllers
2. Application
3. Domain
4. Repository (not doing much)
In the Controllers layer, there's primarily the execution of Commands/Queries through a mediator from the Application layer.
The Application layer contains minimal logic, mainly validations, logging, and operations like sending information via SignalR.
In the Domain layer, I have entity aggregates that maintain an entity (built as a rich domain model), interfaces for the repository, and services for each entity; it also includes specifications for data retrieval from the repository. This layer also holds larger domain services that connect the entity services; they house a significant portion of the business logic that doesn't fit within the entity.
I map entity <-> DTO in the Application layer, skipping a separate entity-to-model mapping in the domain/repository layer. I've had discussions where it was suggested that an additional mapping layer from entity to model is crucial, the idea being that entities should only exist in the Repository layer, while the Domain layer should have its own models.
I see two main issues with this approach. First, my current architecture lets me fully utilize EF entity tracking, so I don't have to manually track what's changed. Adding another layer would require me to manage lists of added/removed/updated entities, map them, and then commit these changes to the database. That is problematic with highly complex business logic where entities are nested: it seems time-consuming and error-prone, especially since my entities and models would almost always have a 1-to-1 correlation.
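(Roughly this kind of flow, with a hypothetical Order/OrderLine aggregate for illustration:)

// EF Core tracks the loaded graph, so nested mutations need no manual diff lists.
var order = await context.Orders
    .Include(o => o.Lines)
    .FirstAsync(o => o.Id == orderId);

order.ApplyDiscount(0.10m);   // rich-domain-model behaviour on the entity itself
order.Lines.RemoveAt(0);      // nested change, tracked as well

// The change tracker works out the inserts/updates/deletes for the whole graph.
await context.SaveChangesAsync();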
Secondly, I understand that an additional layer would be beneficial in the event of database changes. However, such changes would likely necessitate alterations to my models and all the mappings anyway. So, what's the real advantage?
15 replies
Polymorphism in C# makes no sense
Let's assume such a case:
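(The original snippet is missing from this archive; a minimal reconstruction of the likely case, assuming an ICar/IEngine pair:)

public interface IEngine
{
    void Start();
}

public class Engine : IEngine
{
    public void Start() { }
    public void Turbo() { }  // extra method that IEngine does not declare
}

public interface ICar
{
    IEngine Engine { get; set; }
}

public class Car : ICar
{
    // CS0738: 'Car' does not implement interface member 'ICar.Engine'.
    // 'Car.Engine' cannot implement 'ICar.Engine' because it does not have
    // the matching return type of 'IEngine'.
    public Engine Engine { get; set; }
}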
And for the Car class I get an error.
I could use IEngine instead of Engine in Car, which would fix the error. But then I can't use the methods of the Engine class.
I completely don't understand why I can't use Engine since this class implements IEngine.
I can't have different classes implementing ICar, each containing a different class that implements IEngine; they all have to expose the base IEngine.
This is terribly frustrating.
Instead of using interfaces in such cases, I have to get rid of them.
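(For what it's worth, a hedged sketch of C#'s explicit interface implementation, which can expose both the concrete Engine and the IEngine member; whether it fits depends on the exact case above:)

public class Car : ICar
{
    // The typed property keeps Engine's extra methods reachable on Car...
    public Engine Engine { get; set; } = new Engine();

    // ...while the interface member is satisfied explicitly as IEngine.
    IEngine ICar.Engine
    {
        get => Engine;
        set => Engine = (Engine)value;
    }
}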
23 replies
API does not store keys from IdentityServer to validate tokens
I have two machines: one runs IdentityServer (IS) and the other a REST API.
A user logs in through the API, which POSTs to IS; IS returns a JWT in response.
The user authenticates himself to the API with each data request.
And everything was working fine, but today I noticed that there is an unusually high load on my IS
It turns out that every time the user queries the API for data, the API sends a request to IS.
But the API should validate the token itself.
The API is on .NET 6 and IdentityServer is on .NET Core 2.1.
The API sends as many as two requests to IS for each user request:
1. GET /.well-known/openid-configuration/jwks HTTP/1.1
IS response -> HTTP/1.1 200 OK
2. POST /connect/token HTTP/1.1
IS response -> HTTP/1.1 400 Bad Request
And this is happening after the user has already successfully logged in.
IS setup:
new Client
{
    ClientId = "native_client", // string was truncated in the original post
    AllowedGrantTypes = new[] { "password", "client_credentials", "external" },
    AccessTokenType = AccessTokenType.Jwt,
    AccessTokenLifetime = 600, //86400,
    IdentityTokenLifetime = 600, //86400,
    UpdateAccessTokenClaimsOnRefresh = true,
    AbsoluteRefreshTokenLifetime = 2592000,
    AllowOfflineAccess = true,
    RefreshTokenExpiration = TokenExpiration.Absolute,
    RefreshTokenUsage = TokenUsage.OneTimeOnly,
    AlwaysSendClientClaims = true,
    Enabled = true,
    RequireClientSecret = true,
    ClientSecrets = new List<Secret>
    {
        new Secret(configuration.GetConnectionString("NativeClientApiKey").Sha256()),
    },
    AllowedScopes = new List<string>
    {
        "api_default",
        "offline_access",
    }
},
API setup:
services.AddAuthentication(o =>
{
    o.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
    o.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;
})
.AddJwtBearer(o =>
{
    o.Authority = configuration.GetSection("IsHost").Value;
    o.RequireHttpsMetadata = true;
    o.Audience = "api_default";
});
I get 2x more load on IS than on the API, when there should be a few dozen requests per day, not tens of thousands.
3 replies
❔ Weird issue with MediatR pipeline.
I have a typical UnhandledExceptionBehaviour in the pipeline, which executes before the command is executed.
And it works fine, for example for this kind of request:
But when there is no content, this pipeline is omitted. For example:
This is the request:
There used to be a <Unit> type parameter, but I read that <Unit> should be removed in the new version of MediatR, and that caused the code to no longer work as it used to.
Why?
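(A plausible reconstruction of the cause, with illustrative names: in newer MediatR versions a command with no response implements the non-generic IRequest, which does not implement IRequest<TResponse>, so an open-generic behaviour constrained on IRequest<TResponse> is silently skipped for it:)

using MediatR;

public record DeleteItemCommand(int Id) : IRequest;  // no response type

public class UnhandledExceptionBehaviour<TRequest, TResponse>
    : IPipelineBehavior<TRequest, TResponse>
    where TRequest : IRequest<TResponse>  // never matches DeleteItemCommand
{
    public async Task<TResponse> Handle(
        TRequest request,
        RequestHandlerDelegate<TResponse> next,
        CancellationToken cancellationToken)
    {
        try { return await next(); }
        catch (Exception)
        {
            // log here, then rethrow
            throw;
        }
    }
}

// Relaxing the constraint (e.g. to "where TRequest : notnull") lets the
// behaviour run for both IRequest and IRequest<TResponse> commands.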
3 replies
❔ Proper email background service architecture
Hi, I need help. I have a booking app from which I will send confirmation emails, thank-you emails, reminder emails, etc.
I plan to add an email schedule table in the database with information on what the template should be, when to send, etc.
I am using Clean Architecture, where I have Domain, Infrastructure, Application, and Web layers.
I think I should make the email-sending service independent of the main application, but I would like it to have access to Domain and Infrastructure to communicate with the database.
I was thinking that this service should run in the background and every few minutes check the database to see if there is something to send.
Is this a good idea?
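(Roughly what I have in mind, as a sketch; IEmailScheduleRepository and IEmailSender are placeholder abstractions, not existing types:)

using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

public class EmailDispatchService : BackgroundService
{
    private readonly IServiceScopeFactory _scopeFactory;

    public EmailDispatchService(IServiceScopeFactory scopeFactory)
        => _scopeFactory = scopeFactory;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // Poll the email schedule table every few minutes.
        using var timer = new PeriodicTimer(TimeSpan.FromMinutes(2));
        while (await timer.WaitForNextTickAsync(stoppingToken))
        {
            // The service is a singleton, so scoped dependencies
            // (DbContext-backed repositories) are resolved per tick.
            using var scope = _scopeFactory.CreateScope();
            var schedule = scope.ServiceProvider.GetRequiredService<IEmailScheduleRepository>();
            var sender = scope.ServiceProvider.GetRequiredService<IEmailSender>();

            foreach (var email in await schedule.GetDueEmailsAsync(stoppingToken))
            {
                await sender.SendAsync(email, stoppingToken);
                await schedule.MarkSentAsync(email.Id, stoppingToken);
            }
        }
    }
}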
How do I add a reference to the Domain from this service? Through a DLL? Or is it enough to put it next to the Domain and the rest, in the same solution?
I want it separate because the main application will be behind a load balancer, but this small service only needs to run on one machine.
Thank you.
6 replies
❔ Replace Include / ThenInclude with separate calls for entities
Let's assume a simple case. I have a class Fleet with properties Id, Name, and Cars as List<Car>. When I do context.Fleets.Include(x => x.Cars).FirstOrDefault(x => x.Id == 1) and then remove a car from the list and call SaveChanges, EF knows that this entity no longer has a connection with the given Car and removes the connection.
But what happens if I do the following:
var fleet = context.Fleets.FirstOrDefault(x => x.Id == 1);
var cars = context.Cars.Where(x => x.FleetId == 1).ToList();
fleet.Cars = cars;
What will happen if I now remove an entity from this list and save the changes? Will it be the same as if I had used Include?
If not, is it possible to make it work in a similar way?
For performance reasons, I need to remove all Include/ThenInclude and pull entities separately.
However, managing child entities in lists on the main entity is convenient, and it would be nice if it could work in a similar way.
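(A sketch of the split-query pattern under standard EF Core change tracking: since both queries run on the same tracked context, relationship fixup should link the cars to the fleet automatically:)

// Sketch; assumes both queries use the same tracked DbContext.
var fleet = context.Fleets.First(x => x.Id == 1);
var cars = context.Cars.Where(x => x.FleetId == 1).ToList();
// Relationship fixup should already have populated fleet.Cars here,
// which would make a manual assignment redundant for tracked entities.

fleet.Cars.Remove(cars[0]);
// On SaveChanges the severed link should be handled as it would be after
// Include(x => x.Cars): FK nulled or row deleted, depending on whether
// the relationship is optional or required.
context.SaveChanges();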
11 replies
❔ Should I make more transfer classes between layers?
This is also a more general question of whether my architecture makes sense.
I based it on Clean Architecture. (Similar to https://github.com/jasontaylordev/CleanArchitecture)
I have layers:
-Web
-Application
-Infrastructure
-Domain
In domain I have entities declared and domain services / aggregators and validators that deal with business logic.
Entities have very basic business logic concerning only that entity, but most of the logic is in services/aggregators.
In application I have commands and queries where I call methods from the domain and eventually map to a DTO that goes out to web.
In application I also receive domain events, for example to log what happened.
However, I have a problem. Suppose I have a class Car. For it I have a carService that, using the repository, adds, edits, or retrieves cars.
And the method is, for example, _carService.UpdateCar(id, name, color, ... 10 more props) or the analogous _carService.AddCar(name, color, ... 10 more).
I would like to call this in a handler in the application, but passing so many parameters does not appeal to me. What is the best approach? Should I make another transfer class between the application and domain layer?
I was thinking of maybe creating a record for such methods that would act as a transfer model for the data.
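(Something like this hypothetical record, for example:)

// Hypothetical transfer record between Application and Domain.
public record CarData(string Name, string Color /* ...10 more props... */);

public interface ICarService
{
    Task AddCar(CarData data);
    Task UpdateCar(int id, CarData data);
}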
But maybe I should create an additional CarEntity that would only act as a transfer object from/to the DB?
The application will be large and developed over the years. Currently it contains more than 50 different types of entities.
Does my architecture more or less make sense?
7 replies
✅ Seeking Suggestions: Efficient and Reliable Real-time Data Update for Thousands of Web Clients
I have an API developed in .NET and several thousand web application clients. Whenever a change occurs to a specific client (e.g., by id), I need to send information about what changed/added. The solution needs to be highly reliable (99.99%) and fast.
Here's what I've considered so far:
1. Push Notifications: I initially ruled these out because I don't believe they can be 99.9% reliable and consistently fast.
2. Polling: If I were to go with polling, I would send queries every second since the data must be super up-to-date. I would use caching and a list with the last change timestamp. However, I'm concerned about the high volume of requests from so many apps and potential cache fragmentation due to multiple machines behind the load balancer.
3. Websocket: I've discarded this option as I don't need bidirectional communication.
4. Server-Sent Events (SSE): This seems the most suitable for my needs in theory, but I encounter issues when scaling across multiple machines. I've read about solutions using Redis, but that seems to overly complicate matters and increase load (a minimal endpoint sketch follows below).
Any advice or alternative solutions would be greatly appreciated!
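For context, a single-machine SSE endpoint in ASP.NET Core is small; the sketch below assumes a hypothetical IChangeFeed abstraction for per-client changes, and the multi-machine fan-out is exactly the part that remains open:

// Minimal SSE endpoint sketch; IChangeFeed is a placeholder abstraction.
app.MapGet("/clients/{id}/events", async (int id, IChangeFeed feed, HttpContext ctx) =>
{
    ctx.Response.ContentType = "text/event-stream";
    ctx.Response.Headers.CacheControl = "no-cache";

    await foreach (var change in feed.SubscribeAsync(id, ctx.RequestAborted))
    {
        // SSE frames are plain text: "data: <payload>" followed by a blank line.
        await ctx.Response.WriteAsync($"data: {change}\n\n", ctx.RequestAborted);
        await ctx.Response.Body.FlushAsync(ctx.RequestAborted);
    }
});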
146 replies