❔ Dealing with parallel HTTP requests in an API
I have an API that receives new requests before earlier ones have finished executing. This causes unwanted behaviour, such as writing to the database twice with the same PK.
What options do I have to fix this? I'm honestly looking for the easiest solution to implement; at this point I don't care about good practices or performance. The only constraint is that the ASP.NET controller has to stay async (so I can't just drop all requests while I process one).
I've thought about putting requests in queues or channels, and also about implementing some kind of lock that doesn't drop incoming requests (if that's a thing).
Ideas?
Are you using transactions and locks on the SQL side when inserting records?
SQL isn't my wheelhouse, but I know that can help with race conditions.
From a C# perspective, if you don't have control over the SQL or adjusting the SQL doesn't help, then queuing is a great start, but introduces an intentional bottleneck to your application.
Also, I remember there being a class out there to perform bulk copies to SQL, so that might be a helpful path to research.
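The bulk-copy class referred to here is presumably SqlBulkCopy from Microsoft.Data.SqlClient; a minimal sketch, with a made-up connection string, table and columns:

```cs
using System;
using System.Data;
using Microsoft.Data.SqlClient;

// Minimal SqlBulkCopy sketch: push a batch of rows to SQL Server in one round trip.
// The connection string, "dbo.Items" and its columns are placeholders, not the real schema.
var connectionString = "Server=.;Database=MyDb;Integrated Security=true;TrustServerCertificate=true";

var table = new DataTable();
table.Columns.Add("Id", typeof(Guid));
table.Columns.Add("Payload", typeof(string));
table.Rows.Add(Guid.NewGuid(), "example row");

using var bulk = new SqlBulkCopy(connectionString)
{
    DestinationTableName = "dbo.Items"
};
await bulk.WriteToServerAsync(table);
```

Note that SqlBulkCopy is SQL Server-specific; on Postgres the rough equivalent is the COPY protocol (Npgsql's binary import).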
EF Core handles transactions on a per-request basis, and Postgres also queues up commands
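For what it's worth, SaveChanges does run in its own transaction by default, but two concurrent requests still get two independent transactions, so a separate "does it exist?" query can still race with the insert. One database-side option, sketched with a hypothetical AppDbContext/Item model, is an explicit serializable transaction, where the second request either waits or fails with a serialization error and has to retry:

```cs
using System;
using System.Data;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

// Sketch only: AppDbContext, Items and Item are stand-ins for the real model.
public class ItemWriter
{
    public async Task InsertIfMissingAsync(AppDbContext db, Guid id, string payload)
    {
        // Serializable isolation makes the existence check and the insert effectively atomic;
        // a concurrent request either blocks or gets a serialization failure and must retry.
        await using var tx = await db.Database.BeginTransactionAsync(IsolationLevel.Serializable);

        if (!await db.Items.AnyAsync(i => i.Id == id))
        {
            db.Items.Add(new Item { Id = id, Payload = payload });
            await db.SaveChangesAsync();
        }

        await tx.CommitAsync();
    }
}
```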
EF is even further out of my wheelhouse
my issue is more dotnet-related than SQL/EF Core, I think
I just need the logic that decides what to do with the data to run in sequence instead of in parallel
I tried removing async from the controller but that just drops the requests
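An async "lock that doesn't drop incoming requests" does exist: SemaphoreSlim with WaitAsync. Callers queue up asynchronously instead of being rejected, and the controller stays async. A sketch, with made-up names (DataService, IncomingDto):

```cs
using System;
using System.Threading;
using System.Threading.Tasks;

public record IncomingDto(Guid Id, string Payload);

public class DataService
{
    // One async gate for the whole service; WaitAsync queues callers without blocking threads.
    private static readonly SemaphoreSlim Gate = new(1, 1);

    public async Task HandleAsync(IncomingDto dto)
    {
        await Gate.WaitAsync();
        try
        {
            // the existence check + insert (or hand-off to the other service)
            // now runs strictly one request at a time
        }
        finally
        {
            Gate.Release();
        }
    }
}
```

If serializing every request is too coarse, a per-key variant (one semaphore per PK, held in a ConcurrentDictionary) only serializes the requests that actually collide.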
What does your API endpoint do?
the controller passes the data to a service that inserts, ignores, or otherwise processes the data from the request
what's happening is, e.g., 2 requests come in at the same time, the service sees that the data doesn't exist in the DB, and both insert at the same time
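In code, that's roughly the classic check-then-insert race (fragment with stand-in names, inside the service method):

```cs
// Both concurrent requests can pass the AnyAsync check before either insert commits,
// so both try to write the same PK and the slower one hits the UNIQUE/PK constraint.
if (!await db.Items.AnyAsync(i => i.Id == dto.Id)) // request A and request B both see "not found"
{
    db.Items.Add(new Item { Id = dto.Id });
    await db.SaveChangesAsync();                    // the second SaveChanges throws
}
```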
They are both inserted?
Or one is inserted and the other errors out because of UNIQUE constraint for example?
yeah it throws a unique constraint error
which would be fine, but if the same PK is coming in, it means the second request should be handled by service Y instead of service X
this is a job for a queue, no?
Hmmm, yeah, I think that would be the best option. It would add some overhead for sure, but catching the exception would be costlier
yeah overhead is fine
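A minimal sketch of the queue idea with System.Threading.Channels plus a hosted consumer: the controller enqueues, a single BackgroundService drains the channel, so the decide-and-insert logic runs strictly in order. RequestQueue, RequestWorker and WorkItem are illustrative names:

```cs
using System;
using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public record WorkItem(Guid Id, string Payload);

public class RequestQueue
{
    private readonly Channel<WorkItem> _channel =
        Channel.CreateBounded<WorkItem>(capacity: 1000);

    // Called from the controller: awaiting applies backpressure instead of dropping requests.
    public ValueTask EnqueueAsync(WorkItem item) => _channel.Writer.WriteAsync(item);

    public ChannelReader<WorkItem> Reader => _channel.Reader;
}

public class RequestWorker : BackgroundService
{
    private readonly RequestQueue _queue;

    public RequestWorker(RequestQueue queue) => _queue = queue;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // Single consumer: items are handled one at a time, in arrival order,
        // so the "insert new vs. hand off to service Y" decision can't race.
        await foreach (var item in _queue.Reader.ReadAllAsync(stoppingToken))
        {
            // check the DB, then insert or route to the other service
        }
    }
}
```

Wire-up would be services.AddSingleton<RequestQueue>() and services.AddHostedService<RequestWorker>(); the trade-off is that the HTTP response comes back before the DB work finishes, unless the controller also waits on a completion signal.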
catching the exception and redirecting to service Y seems weird; the code shouldn't be attempting the insert in the first place
Yup, especially if this happens a lot
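For comparison, the catch-and-redirect variant being weighed here would look roughly like this, assuming Npgsql/EF Core, where a unique violation surfaces as a PostgresException with SqlState 23505 inside the DbUpdateException (all type names are stand-ins):

```cs
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;
using Npgsql;

// Stand-in types: AppDbContext, Item, ItemDto and IServiceY are not real APIs.
public class InsertOrRedirectHandler
{
    public async Task HandleAsync(AppDbContext db, ItemDto dto, IServiceY serviceY)
    {
        try
        {
            db.Items.Add(new Item { Id = dto.Id, Payload = dto.Payload });
            await db.SaveChangesAsync();
        }
        catch (DbUpdateException ex)
            when (ex.InnerException is PostgresException { SqlState: "23505" }) // unique_violation
        {
            // The row already exists, so this request is the "second" case: hand it to service Y.
            await serviceY.HandleExistingAsync(dto);
        }
    }
}
```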
Shouldn't the database be able to handle that? If PKs are autogenerated by the DB, it should queue up the insertions so they don't conflict
I'm using UUIDs as PK
then how could you have two same UUIDs
Was this issue resolved? If so, run
/close
- otherwise I will mark this as stale and this post will be archived until there is new activity.