Theo's Typesafe Cult
Created by IcyyDicy on 1/6/2025 in #questions
Architecting a task queue
Heyo! So some university buddies and I want to host an LLM prompt-engineering event where people try to steal (or protect) a password hidden in a system prompt. "The vision" is to have everyone's user prompts go up against everyone's system prompts, with everyone iterating to build better and better prompts.
To boil this down: after someone submits a new system prompt, the system should queue up LLM tasks to run that system prompt against everyone else's user prompts, and vice versa for new user prompts. Of course, it should drop any queued tasks that no longer reference the latest prompts.
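To make the "drop stale tasks" part concrete, here's roughly what I'm picturing (all names made up): each prompt carries a version number, and tasks are lazily discarded at dequeue time if either side has been replaced since the task was queued.

```javascript
// Sketch only: lazy stale-task dropping via prompt versions.
const latestVersion = new Map(); // promptId -> latest version number

function submitPrompt(promptId, version) {
  latestVersion.set(promptId, version);
}

// Each task remembers which versions it was queued against.
const queue = []; // { systemPromptId, sysVersion, userPromptId, userVersion }

function nextTask() {
  while (queue.length > 0) {
    const task = queue.shift();
    // Discard tasks whose prompts have since been replaced
    if (latestVersion.get(task.systemPromptId) === task.sysVersion &&
        latestVersion.get(task.userPromptId) === task.userVersion) {
      return task;
    }
  }
  return null; // nothing fresh to run
}
```

This avoids having to walk the whole queue on every new submission; stale entries just get skipped when a worker asks for work.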
We're planning on architecting this using three parts:
- The front end that will take in the two types of prompts and display how they worked (NextJS)
- The back end to match up system prompts and user prompts and send them out to available LLM workers (Plain old Node + Express)
- Self-hosted LLM workers (multiple, separate computers!) (LocalAI; the computer lab already has the compute available and we don't want to pay for ChatGPT tokens :p)
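For the matchmaking step in the back end, the shape I have in mind is just a cross product: a new system prompt gets paired against every current user prompt, and vice versa (again, names are made up):

```javascript
// Sketch of the matchmaking step in the back end.
const systemPrompts = new Map(); // id -> prompt text
const userPrompts = new Map();
const taskQueue = [];

function addSystemPrompt(id, text) {
  systemPrompts.set(id, text);
  // Pair the new system prompt against every existing user prompt
  for (const userId of userPrompts.keys()) {
    taskQueue.push({ systemPromptId: id, userPromptId: userId });
  }
}

function addUserPrompt(id, text) {
  userPrompts.set(id, text);
  // Pair the new user prompt against every existing system prompt
  for (const sysId of systemPrompts.keys()) {
    taskQueue.push({ systemPromptId: sysId, userPromptId: id });
  }
}
```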
My question is: what would be a good way to glue all of this together? I'm not too worried about implementing some sort of self-cleaning priority queue, but more about how to get all of these parts talking to each other nicely.
Should I just have the front end send the prompts to the back end and occasionally poll it for results? Or should I have the front end push prompts to something like a Postgres database and watch it for results, while the back end also watches the database for changes and manages the queue based on what it sees? Or is there something completely different that people use for cases like this?
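For the database option, the rough shape I'm imagining is tasks as rows with a status column: workers claim a pending row, run it, and write the result back (in actual Postgres I believe the claim would be a `SELECT ... FOR UPDATE SKIP LOCKED` so two workers can't grab the same row). Here's an in-memory sketch of that flow, with made-up names:

```javascript
// In-memory sketch of the "database as queue" pattern.
const tasks = []; // { id, status: 'pending' | 'running' | 'done', workerId?, result? }

// A worker claims the oldest pending task by flipping its status.
function claimTask(workerId) {
  const task = tasks.find(t => t.status === 'pending');
  if (!task) return null; // nothing to do
  task.status = 'running';
  task.workerId = workerId;
  return task;
}

// The worker reports back and the row becomes readable by the front end.
function completeTask(id, result) {
  const task = tasks.find(t => t.id === id);
  task.status = 'done';
  task.result = result;
}
```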
Thanks for reading my ramblings, any help will be greatly appreciated!