tRPC
Created by notoriousfeli on 2/28/2024 in #❓-help
Setting up tRPC for Next.js with edge AND serverless functions
tldr: I am trying to build an app that uses both edge AND serverless functions by creating separate endpoints. Has anyone ever done this successfully?

I am building an app that runs OpenAI API requests and is connected to a PlanetScale MySQL DB with a Prisma layer (t3 setup). All functions are serverless. I am facing timeout issues on the OpenAI requests, since they take quite long. I am trying to switch all OpenAI functionality to run on the edge, which means longer timeout limits on Vercel and the ability to stream the response. All the other functions need to remain serverless, since they use Prisma, which isn't available on the edge.

What I have tried so far (rough sketches of these steps follow below):
1. Create a separate context for serverless and edge
2. Create separate routers for serverless and edge
3. Create separate endpoints /pages/api/trpc/[trpc].ts and /pages/api/trpc/edge.ts that utilize the respective context & router
4. Introduce a rewrite for the routes in vercel.json:
```json
{
  "rewrites": [
    { "source": "/api/openai/:path*", "destination": "/api/trpc/edge" }
  ]
}
```
5. Created separate variables api and apiEdge in server/utils/api.ts to use in the frontend, but reverted this

I am still not 100% sure whether this is even possible. Has anyone ever done something like this? Am I on the right track?

More specifically:
- Is it possible to create two endpoints in /pages/api/trpc?
- How do I correctly target these endpoints?
- How do I supply the endpoints with the correct context?
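For step 1, a minimal sketch of what the two contexts could look like, assuming a t3-style project and tRPC v10; all file paths and names here are illustrative, not from the original post. The key constraint is that the edge context must not pull in Prisma:

```ts
// server/api/context.ts: Node/serverless context with Prisma, as in the default t3 setup
import type { CreateNextContextOptions } from "@trpc/server/adapters/next";
import { prisma } from "~/server/db";

export const createServerlessContext = (opts: CreateNextContextOptions) => ({
  prisma,
  req: opts.req,
  res: opts.res,
});
export type ServerlessContext = Awaited<ReturnType<typeof createServerlessContext>>;

// server/api/context-edge.ts: edge-safe context. No Prisma import anywhere in
// this module graph, since the standard Prisma client can't run on the edge.
import type { FetchCreateContextFnOptions } from "@trpc/server/adapters/fetch";

export const createEdgeContext = (opts: FetchCreateContextFnOptions) => ({
  headers: opts.req.headers,
});
export type EdgeContext = Awaited<ReturnType<typeof createEdgeContext>>;
```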
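For step 3, two handler files are indeed possible. A hedged sketch, again assuming tRPC v10: the serverless endpoint keeps the standard Next adapter, while the edge endpoint uses the fetch adapter with `runtime: "edge"`. One assumption worth flagging: using a catch-all filename under a separate folder (e.g. /pages/api/edge/[trpc].ts rather than /pages/api/trpc/edge.ts) lets tRPC resolve procedure paths from the URL, which may remove the need for the vercel.json rewrite. Router names are illustrative:

```ts
// pages/api/trpc/[trpc].ts: unchanged serverless handler
import { createNextApiHandler } from "@trpc/server/adapters/next";
import { serverlessRouter } from "~/server/api/root"; // illustrative name
import { createServerlessContext } from "~/server/api/context";

export default createNextApiHandler({
  router: serverlessRouter,
  createContext: createServerlessContext,
});

// pages/api/edge/[trpc].ts: edge handler. The catch-all segment means a call
// to e.g. openai.complete arrives as /api/edge/openai.complete and resolves.
import { fetchRequestHandler } from "@trpc/server/adapters/fetch";
import { edgeRouter } from "~/server/api/edge-root"; // illustrative name
import { createEdgeContext } from "~/server/api/context-edge";

export const config = { runtime: "edge" };

export default async function handler(req: Request) {
  return fetchRequestHandler({
    endpoint: "/api/edge",
    router: edgeRouter,
    req,
    createContext: createEdgeContext,
  });
}
```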
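For targeting the endpoints from the frontend (the step-5 question), an alternative to two separate `api` variables is a single client that routes per procedure with `splitLink`. This sketch assumes the edge procedures live under an `edge` namespace of one merged type-level router; that prefix convention and the base-URL helper are assumptions, not from the original post:

```ts
// utils/api.ts: one tRPC client, split across the two endpoints by procedure path
import { createTRPCNext } from "@trpc/next";
import { httpBatchLink, splitLink } from "@trpc/client";
import type { AppRouter } from "~/server/api/root"; // merged type-level router (assumed)

// minimal base-URL handling; a real setup would also cover Vercel deployments
const getBaseUrl = () =>
  typeof window !== "undefined" ? "" : `http://localhost:${process.env.PORT ?? 3000}`;

export const api = createTRPCNext<AppRouter>({
  config() {
    return {
      links: [
        splitLink({
          // procedures under the `edge` namespace go to the edge endpoint,
          // everything else to the regular serverless endpoint
          condition: (op) => op.path.startsWith("edge."),
          true: httpBatchLink({ url: `${getBaseUrl()}/api/edge` }),
          false: httpBatchLink({ url: `${getBaseUrl()}/api/trpc` }),
        }),
      ],
    };
  },
});
```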
2 replies
Theo's Typesafe Cult
Created by notoriousfeli on 2/28/2024 in #questions
tRPC setup for creating serverless AND edge api endpoints
Hey t3-lovers,

tldr: I am trying to build an app that uses both edge AND serverless functions by creating separate endpoints for the functionalities. Has anyone ever done this successfully?

I am building an app that runs OpenAI API requests and is connected to a PlanetScale MySQL DB with a Prisma layer (t3 setup). All functions are serverless. I am facing timeout issues on the OpenAI requests, since they take quite long. I am trying to switch all OpenAI functionality to run on the edge, which means longer timeout limits on Vercel and the ability to stream the response.

What I have tried so far (a sketch of the router split follows below):
1. Create a separate context for serverless and edge
2. Create separate routers for serverless and edge
3. Create separate endpoints /pages/api/trpc/[trpc].ts and /pages/api/trpc/edge.ts that utilize the respective context & router
4. Introduce a rewrite for the routes in vercel.json:
```json
{
  "rewrites": [
    { "source": "/api/openai/:path*", "destination": "/api/trpc/edge" }
  ]
}
```

I am still not 100% sure whether this is even possible. Has anyone ever done something like this? Am I on the right track?
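For step 2, a sketch of how the edge router could be initialized against its own context, so no Prisma-dependent code ends up in the edge bundle. This assumes tRPC v10; the `EdgeContext` type and the example procedure are illustrative, not from the original post:

```ts
// server/api/edge-root.ts: a separate tRPC instance just for the edge router
import { initTRPC } from "@trpc/server";
import { z } from "zod";
import type { EdgeContext } from "~/server/api/context-edge"; // illustrative path

const t = initTRPC.context<EdgeContext>().create();

export const edgeRouter = t.router({
  openai: t.router({
    complete: t.procedure
      .input(z.object({ prompt: z.string() }))
      .mutation(async ({ input }) => {
        // the long-running OpenAI call would go here; running on the edge
        // allows streaming and a longer execution window than the default
        // serverless timeout
        return { echo: input.prompt };
      }),
  }),
});

export type EdgeRouter = typeof edgeRouter;
```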
3 replies