The combination of API routes and data loading functions
I have been trying out integrating my existing API routes with cache and action, deploying to Cloudflare.
The goal is to have the benefits of 1) preloading route data, 2) having an open API, and 3) being able to easily log API accesses.
The SS examples seem to want you to use server functions, which would not be an open API, and I don't know how I would log properly.
If I call fetch to an API route inside cache with no use server, then it can be called on either the client or the server.
I need cookies/credentials, so calling on the client is fine because the browser will pass them, but on the server there are no credentials and Cloudflare errors out.
I can utilize use server to make each call a server function that calls an API route, then extract the headers from the event and wire them into the fetch, to make it as close as possible to calling the API route from the browser.
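For illustration, roughly what that header-forwarding pattern looks like with SolidStart's cache + "use server" (the /api/todos route is a made-up example):

```ts
// Rough sketch, assuming SolidStart: a cache-wrapped server function that
// forwards the page request's cookies to an internal API route, so the fetch
// behaves as if the browser had called it. "/api/todos" is a made-up route.
import { cache } from "@solidjs/router";
import { getRequestEvent } from "solid-js/web";

export const getTodos = cache(async () => {
  "use server";
  const event = getRequestEvent();
  if (!event) throw new Error("expected a request event on the server");
  // fetch on the server sends no cookies by default, so copy the browser's
  // credentials from the incoming page request onto the internal API call
  const res = await fetch(new URL("/api/todos", event.request.url), {
    headers: { cookie: event.request.headers.get("cookie") ?? "" },
  });
  if (!res.ok) throw new Error(`API route responded with ${res.status}`);
  return res.json();
}, "todos");
```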
But this seems very non-ergonomic... It seems as though deciding to use cache and action means not having a good time with API routes, and instead opting to use only server functions.
Your process of passing through headers to the fetch call is correct, but it is not unique to cache/action/server functions; you have to do it in any SSR solution where you fetch on the server.
If it were me I’d make the frontend use server functions and the openapi one a separate system, and have them reuse the same logic
My plan was to have the openapi and "tap into" it with the server functions (by passing headers)
As far as sharing logic goes, I am not sure, because it would not allow me to log properly.
If my server function just consumed the logic of its equivalent API route, and the client called the server function, what would the server function log as the name of the route or resource I accessed? I guess I could manually label each server function with its corresponding API route, but that begins to make the API routes redundant.
The problem with using fetch on the server is that you’re double-invoking your api, it’s not all done in one request.
If you wanted to still use server functions but access the api like it was REST, then I’d use something like ts-rest that you can invoke inside server functions, and expose as an external api
I agree that is the key problem: every request would be a server call to an API route call.
I am not too familiar with ts-rest, but I am not sure how this solves the double-call problem. Would a server function call not then make an RPC-like call? And would I not also have to pass headers?
If I were to expose the ts-rest setup as an API, and not use the native API routes, it gets back to my original idea of "walking down the path of using cache/action leads to turning away from API routes"
I definitely don't want the complexity of doing an API-route-first approach and trying to "hook into" it with server functions. But I took this approach for a unified model and to be able to test with REST tools like Postman. It also makes logging easier because the API route path is within the request object. The downsides seem to be the complexity of passing headers and double-calling. This stems from the fact that fetch behaves differently with credentials depending on whether it's called on the client or the server.
I am now thinking of a server-function-first approach. I can wrap each function with some logic to handle errors, and attach the name of the function to event.locals for logging. I can reuse the logic of the server functions inside any API route I need to be public. If that all goes well, the only thing I think I am losing is the ability to test with REST tools like Postman/Insomnia, because not all of my server functions would have an equivalent API route (if they did, that's a lot of headache and coupling).
"I am not too familiar with ts-rest but I am not sure how this solves the double-call problem"
The server function would, via ts-rest, handle the request inside itself, rather than doing a double fetch.
"If I were to expose the ts-rest setup as an API, and not use the native API routes, it gets back to my original idea of 'walking down the path of using cache/action leads to turning away from API routes'"
If I understand correctly, you want to generate an OpenAPI schema as well, which Start's API routes aren't capable of doing on their own anyway. You may as well just use API routes as an entrypoint to a more capable router, be that ts-rest, hono, express, or whatever.
Nah, I don't mean OpenAPI, I just meant having my API be "open" as in public.
Ah in that case ts-rest wouldn't be as useful.
Ultimately, there's no way to use API routes in ssr that doesn't require a double-fetch and passing headers along (at least not in an isomorphic fashion) - that's part of the advantage of server functions. If it were me I'd move the core logic to a separate package and instrument that with logs, and then expose it over both server functions and REST for Postman. I guess it depends whether the REST-level instrumentation is super important or you'd prefer to avoid the double-fetching.
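A minimal sketch of that shared-logic layout, assuming SolidStart, with placeholder helpers (logAccess, loadTodosFromDb) standing in for the actual logging and data access:

```ts
// lib/todos.ts - the shared core logic, instrumented once.
// logAccess and loadTodosFromDb are placeholders for your own logging/data layer.
export async function listTodos(userId: string) {
  await logAccess({ resource: "todos", action: "read", userId });
  return loadTodosFromDb(userId);
}

// todos.server.ts - exposed as a server function for the app's own loaders
import { cache } from "@solidjs/router";
import { listTodos } from "./lib/todos";

export const getTodos = cache(async (userId: string) => {
  "use server";
  return listTodos(userId);
}, "todos");

// routes/api/todos.ts - the same logic exposed as a public REST endpoint
import type { APIEvent } from "@solidjs/start/server";
import { listTodos } from "../../lib/todos";

export async function GET(event: APIEvent) {
  const userId = new URL(event.request.url).searchParams.get("userId") ?? "";
  return Response.json(await listTodos(userId));
}
```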
yeah, sounds like it's a choice between double-fetching or maintaining two separate parallel handlers for the logic (one being server functions and the other API routes)
I appreciate your feedback on this!
Hi. Actually I've addressed this by making a separate client just for SSR fetch.
It's basically a polyfill of fetch, but you have to inject the REQUEST headers + the RESPONSE headers of the request that initiated the page render into the data loading function.
Here's how I do it in trpc:
As much as people advise not to call the API again, I think in terms of maintainability it helps, because I usually use my routes in either a SPA environment (credentials are present) or an SSR environment (where I still want to pass credentials, but the only real need is to hydrate the HTML for SEO or social share stuff).
I would personally never mess with telefunc or a "server function" that calls some data access object on a page data loader. I'm comfortable just double-calling the API.
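For reference, a rough sketch of that kind of SSR-only fetch wrapper (hypothetical names, not the snippet referenced above):

```ts
// Hypothetical sketch: an SSR-only fetch wrapper that copies the page
// request's cookies onto outgoing API calls and forwards any Set-Cookie
// header back onto the page response.
export function createSSRFetch(pageRequest: Request, pageResponseHeaders: Headers) {
  return async (input: string, init: RequestInit = {}) => {
    const headers = new Headers(init.headers);
    // pass the browser's credentials along, since server-side fetch has none
    const cookie = pageRequest.headers.get("cookie");
    if (cookie) headers.set("cookie", cookie);

    const res = await fetch(input, { ...init, headers });

    // bubble refreshed cookies back to the browser via the page response
    const setCookie = res.headers.get("set-cookie");
    if (setCookie) pageResponseHeaders.append("set-cookie", setCookie);

    return res;
  };
}
```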
since you're using tRPC I'll mention that we made a custom tRPC link that uses a server function instead of raw fetch.
no need to pass through headers or anything since you can just use vinxi's getHeader or the actual event from getRequestEvent.
https://github.com/mattrax/Mattrax/tree/main/packages/trpc-server-function
Looks awesome! I'll go check it out!
probably not too useful for sabercoy since they want a REST-compatible API, not a trpc/server function style one, but yea might be useful for you
yeah, I was sitting here trying to figure out how I could use this 😂
I was also going to make some wrapper that could isomorphically fetch
but then there's double calling
I've done this for Hono as well actually, which is btw technically REST compatible.
If you use axios, you can pretty much do the same thing of making an initializer before making a GET or POST call in the server.
const client = initAxiosSSR(requestHeaders, responseHeaders)
client.POST()
the request headers would be sent and the response headers would be sent back when the data loader finishes.
that will still double fetch though right?
Yes it will. But in my days of just building getServerSideProps apps with NextJS, I don't think it was really that big of a deal.
yeah fair
btw if it wasn't clear this doesn't double fetch
lol, yeah this is already miles ahead
I had also thought that if this is deployed on an edge runtime like CF workers, double fetching probably does not have a huge cost since the api endpoint will be right there in the network
but that is in terms of response time, perhaps there is double the cost in terms of pricing
i wonder if we could make a fancy fetch that on the server reaches into the server runtime and executes api routes directly
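Purely as a speculative sketch of that idea (nothing here is an existing Start feature): an isomorphic fetch that consults a hand-maintained map of in-process handlers on the server and falls back to a normal fetch in the browser.

```ts
// Speculative sketch only. Each map entry would adapt the corresponding API
// route handler to a plain (Request) => Promise<Response> call; the inline
// handler below is just a stand-in.
import { isServer } from "solid-js/web";

const inProcessHandlers: Record<string, (request: Request) => Promise<Response>> = {
  "/api/todos": async () => Response.json([{ id: 1, title: "example" }]),
};

export async function appFetch(path: string, init?: RequestInit): Promise<Response> {
  if (isServer && path in inProcessHandlers) {
    // on the server, skip the extra network hop and run the handler directly
    return inProcessHandlers[path](new Request(new URL(path, "http://internal"), init));
  }
  // in the browser (or for unlisted paths), fall back to a normal fetch
  return fetch(path, init);
}
```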
also @Numnumberry did you settle on a routing solution for your api?
I actually thought this was already possible at first. I thought I remembered seeing that you could use an internal fetch function to skip making another request
but upon looking at it, it seems it just makes it so you don't have to "worry about the origin of the URL".. whatever that means, I guess talking about CORS
hm where's that quote from?
"where you don't have to "worry about the origin of the URL".. whatever that means"
this is saying you can just fetch("/api/some/path") instead of fetch("https://my.website/api/some/path")
I have been thinking about it for hours and testing things. I have been looking at cache server functions and how they behave with ErrorBoundary. I have been looking at what sort of wrapper I could create for server functions
one issue is I do not know how to log a server function
if I have an action that mutates a resource, I want to log it, but the name of the function is a cryptic hash and number, not useful for identifying what was called
oooh, okay.. well that's even less useful than I thought XD but still cool
yeah, with server fns you have to specify all that stuff yourself; their references can change when the file changes, so you'd need to hardcode the name you log
and then i assume you'd need a way to associate logs within that function with the name of the function?
i haven't done much logging work outside of console.log lol
I also find that, on initial load, multiple cache server functions can be called during SSR (so there is only 1 event, that is the event of loading the initial page, so if something were to be logged it could only be that "I loaded the initial page")
whereas if you revalidate a cache key, you will call one or more cache server functions to get new data (which now are their own events to be logged)
so this inconsistency is what I have been thinking about
I have been thinking maybe only log mutations/actions and not "GET"s for new data
hmm i don't think that's accurate
yeah, it just feels bad when I have the name of the function right there
and then I hard code a string that is the name of that function to log (coupling) lol
if the same cache is called multiple times then you'll only get 1 log yeah, but if you call 3 different cache functions during SSR you'll get 3 different logs
that's a behaviour of cache though, not server functions
when I say "log" I don't mean console.log, I mean log the request that was made to the server (in my case I log it in MongoDB)
on initial load, there is only 1 request for the document, this request will call every cache server function it needs and stream in the result
ah yep there's only 1 network request
so yea you'd need to instrument each server function individually
yeah, something like that is what I have been looking into (and also an error handler/catcher)
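A sketch of what that kind of wrapper could look like (names here are made up; it is meant to sit around the implementation inside the "use server" body, and recordAccessLog stands in for the MongoDB logger):

```ts
// Sketch: each server function gets an explicit, hardcoded name, errors are
// caught in one place, and the name is stashed on event.locals so a
// request-scoped logger can pick it up at the end of the request.
import { getRequestEvent } from "solid-js/web";

export function instrumented<TArgs extends unknown[], TResult>(
  name: string, // hardcoded, since the generated server-function id is just a hash
  fn: (...args: TArgs) => Promise<TResult>
) {
  return async (...args: TArgs): Promise<TResult> => {
    const event = getRequestEvent();
    if (event) {
      // accumulate names, since one SSR request can run several functions
      const called = (event.locals.calledFunctions as string[] | undefined) ?? [];
      event.locals.calledFunctions = [...called, name];
    }
    try {
      return await fn(...args);
    } catch (err) {
      await recordAccessLog({ name, error: String(err), at: new Date() }); // placeholder logger
      throw err;
    }
  };
}
```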
the nice thing about API Routes, even if you double call, is that 1 request equates to 1 function which is 1 log
but in the case of cache server functions and SSR and revalidations, the former could be the case, but ALSO you can have 1 request equate to multiple functions for 1 log
so it's just me trying to wrap my head around what a maintainable solution would be and whether it's even worth it
yeah if logging's a big deal and you don't want to build all the instrumentation yourself i can understand just going with a rest api
at least stuff like hono lets you use an rpc-like interface for your rest routes
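For reference, Hono's rpc-style client over an ordinary REST route looks roughly like this (the route and data are made up; server and client are shown together for brevity):

```ts
// Minimal illustration of Hono's rpc-style client over a normal REST route.
import { Hono } from "hono";
import { hc } from "hono/client";

const app = new Hono().get("/api/todos", (c) => c.json([{ id: 1, title: "example" }]));
export type AppType = typeof app;

// the client is typed from the route definitions above
const client = hc<AppType>("http://localhost:3000");
const res = await client.api.todos.$get();
const todos = await res.json(); // inferred as { id: number; title: string }[]
```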
in the end, I just want to be able to look back and see, for every data access (whether reading or writing) who did what and when
yeah whatever works in the end
looks like there are wonky bundling issues when trying to wrap server functions