Rate Limit in Next Middleware or TRPC Middleware

What do you think is the best way to rate limit your app with the T3 stack? To me, rate limiting in Next middleware seems more efficient and logical, since you are on the edge and all of your app traffic goes through there (static assets, pages, API, etc.). On the other hand, rate limiting your API with different procedures for different scopes makes for cleaner organization, imo. Since the T3 stack can't run on the edge (unless you drop Prisma, at least for some routes), you are forced to rate limit your API in a serverless function. Maybe I'm getting something wrong, but to me rate limiting in Next middleware seems better since you can rate limit every type of route at the edge (it would be insane if you couldn't pass the next-auth context there too 🙏). Is it really worth the time to consider, or is it useless and I should just go with whatever I want?
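To make the trade-off concrete, here is a minimal sketch of the shared core of either approach: a fixed-window counter keyed by IP (or user ID). All names are illustrative, not from the thread. Note that an in-memory `Map` only works as a demo: real edge middleware instances don't share memory, so you would back this with a shared store such as Redis.

```typescript
// Minimal in-memory fixed-window rate limiter (illustrative sketch).
// In production, replace the Map with a shared store (e.g. Redis),
// since edge/serverless instances each get their own memory.
type Window = { count: number; resetAt: number };

export class FixedWindowLimiter {
  private windows = new Map<string, Window>();

  constructor(
    private readonly limit: number,    // max requests per window
    private readonly windowMs: number, // window length in ms
  ) {}

  // Returns true if the request identified by `key` (e.g. an IP) is allowed.
  consume(key: string, now: number = Date.now()): boolean {
    const w = this.windows.get(key);
    if (!w || now >= w.resetAt) {
      // First request in a fresh window for this key.
      this.windows.set(key, { count: 1, resetAt: now + this.windowMs });
      return true;
    }
    if (w.count < this.limit) {
      w.count++;
      return true;
    }
    return false; // over the limit for this window
  }
}
```

From a Next middleware you would call something like `limiter.consume(ip)` and return a 429 response when it is false; from a tRPC middleware you would throw instead. The counting logic itself is the same in both places, which is partly why this ends up being an architectural question rather than a tooling one.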
2 Replies
justmaier • 2y ago
- GitHub - animir/node-rate-limiter-flexible: Count and limit requests by key with atomic increments in single process or distributed environment.
- civitai/rate-limiting.ts at main · civitai/civitai
- civitai/[modelVersionId].ts at main · civitai/civitai
Nerap (OP) • 2y ago
Thanks for the repo, but the question wasn't about the tools; it was more about the best approach to implementing rate limiting as a whole in the T3 stack.
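One way to see what the per-procedure approach buys you is a middleware factory that scopes its counter to a single procedure name. This is a hypothetical sketch with the tRPC wiring (`t.middleware`, `TRPCError`) stripped out so it stands alone; every name here is illustrative.

```typescript
// Sketch of procedure-scoped rate limiting, in the style of a tRPC
// middleware factory. The tRPC-specific plumbing is omitted; in real
// code the thrown error would be a TRPCError with code
// "TOO_MANY_REQUESTS" instead of a plain Error.
type Hit = { count: number; resetAt: number };
const buckets = new Map<string, Hit>();

export function procedureLimiter(procedure: string, limit: number, windowMs: number) {
  // Returns a guard you would run before the procedure's resolver.
  return (userKey: string, now: number = Date.now()): void => {
    // Scoping the bucket per procedure lets e.g. `post.create` have a
    // much tighter limit than `post.list` for the same user.
    const key = `${procedure}:${userKey}`;
    const hit = buckets.get(key);
    if (!hit || now >= hit.resetAt) {
      buckets.set(key, { count: 1, resetAt: now + windowMs });
      return;
    }
    if (++hit.count > limit) {
      throw new Error("TOO_MANY_REQUESTS");
    }
  };
}
```

The appeal of this layout is exactly the "cleaner organization" point from the original question: limits live next to the procedures they protect and can use authenticated context (user ID, role) as the key, whereas Next middleware at the edge covers every route type but only sees the raw request.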