Would y'all use Turborepo/Nx for a monorepo with 2 or 3 services?
My architecture problem is that I have a user-facing web app (thinking Remix, not that it really matters), but I want a secondary service to handle long-running jobs orchestrated via a message queue, so that I can deploy the Remix app regionally with scale-to-zero and scale the background worker independently. The two services would share a Prisma schema and probably some types/functions, so a monorepo seems like the easiest way to do that.
I started trying to set up a Remix app and a TS API service with Nx but found it fairly complex without using the built-in generators. I've never used a monorepo architecture before, so I'm wondering if my use case is simply too simple to warrant tools like Nx? I could probably get away with rolling my own second TS app alongside the Remix one and setting up a second Dockerfile and Fly.io config, but I was unsure whether that would kneecap me later as I scale, compared to using proper monorepo tooling.
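For concreteness, here's a rough sketch of the layout I have in mind, assuming pnpm workspaces (all directory and package names here are illustrative, not a committed structure):

```text
.
├── apps/
│   ├── web/          # Remix app — deployed regionally, scale-to-zero
│   └── worker/       # queue consumer for the long-running jobs
├── packages/
│   ├── db/           # shared Prisma schema + generated client
│   └── shared/       # shared types and helper functions
├── pnpm-workspace.yaml
└── turbo.json        # only if a tool like Turborepo is used
```

Each app would get its own Dockerfile and Fly.io config, while `packages/db` is the single source of truth for the Prisma schema.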
3 Replies
@Zan Turbo is fine for this use case.
Try reading this article + repo.
https://www.chernicki.com/blog/infinitely-scalable-applications
https://github.com/supabase-community/create-t3-turbo
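If you do go with Turbo, the config for two services stays small. A hedged sketch of a `turbo.json` (task names are assumptions that would need to match the `scripts` in your packages; newer Turborepo versions rename `pipeline` to `tasks`):

```json
{
  "$schema": "https://turbo.build/schema.json",
  "pipeline": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": ["build/**", "dist/**"]
    },
    "dev": { "cache": false, "persistent": true },
    "db:generate": { "cache": false }
  }
}
```

The `"^build"` dependency just means each app builds after the shared packages it depends on, which is what makes the shared Prisma package work cleanly.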
I forked off of this for my mono repo. https://github.com/clerk/t3-turbo-and-clerk
The only issue I have with Turbo is that it doesn't support the Bun runtime, so I'll likely need to refactor before deployment if I want things running on Bun in the future. Which I will, for the performance gains: benchmarks reportedly show it serving around 2x the requests at scale and being up to 18x faster on some workloads.
Then eventually break out heavy-bandwidth API routes into Go or Rust.