Job workers to consume from a queue
I want to set up a jobs worker - or multiple workers - to fetch updated information from the OpenAI API when certain events occur. The pattern I'd like to use is an event queue, rather than invoking the jobs worker directly from the TS code.
Is this something that's possible with Wasp 0.15.0, or do I need to either use a cron job or stick with the explicit trigger?
Hi @Sergio Visinoni. I'm kapa.ai, your friendly support bot. I will try to answer your question by searching through the knowledge base. :wave:
----------------------
Yes, Wasp does support background jobs that can be used to fetch updated information from the OpenAI API when certain events occur. You can set up a job worker to perform tasks in the background without blocking or affecting the execution of the rest of the app. This can be done using the job functionality in Wasp.
Here is an example of how you can declare a job in Wasp:
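Something along these lines would do it - the job name mySpecialJob and the import path @src/workers/bar are placeholders taken from the recurring-job example in the Wasp docs, not requirements:

```wasp
job mySpecialJob {
  executor: PgBoss,
  perform: {
    // Assumed location of the worker function; adjust to your project layout.
    fn: import { foo } from "@src/workers/bar"
  },
  schedule: {
    // Run at the start of every hour.
    cron: "0 * * * *",
    // These args are passed to foo on each scheduled run.
    args: {=json { "job": "args" } json=}
  }
}
```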
In this example, foo({ job: "args" }) gets automatically scheduled and invoked for you every hour. You don't need to invoke anything in your TypeScript code.
However, please note that Wasp currently only supports PostgreSQL as the database system for job executors. If you're using a different database system, you might need to switch to PostgreSQL.
Also, keep in mind that while Wasp does support recurring jobs, it doesn't seem to directly support an event queue pattern out of the box. You might need to implement this pattern yourself in your JavaScript or TypeScript code.
For more information, you can refer to the Wasp documentation on jobs (https://wasp-lang.dev/docs/advanced/jobs) and the Github issue on supporting background jobs (https://github.com/wasp-lang/wasp/issues/530).
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai would it be possible to spawn a job / worker that is always running, listening to events on a certain postgres table to be used as an event queue?
Let me check the knowledge base for you. One moment! :books:
----------------------
Wasp currently does not directly support spawning a job or worker that is always running and listening to events on a PostgreSQL table. The job functionality in Wasp is designed for running tasks at specific intervals (using cron syntax) or on-demand, rather than continuously listening for events.
However, you could potentially implement this functionality yourself in your JavaScript or TypeScript code. You could create a job that periodically checks the PostgreSQL table for new events and processes them. Here's a basic example of how you might declare such a job:
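As a sketch (the job name, the Event entity, and the @src/workers/events module path are assumptions for illustration, not part of the original answer), a polling job could be declared like this:

```wasp
job checkEventsJob {
  executor: PgBoss,
  perform: {
    // Assumed module path for the worker function.
    fn: import { checkEvents } from "@src/workers/events"
  },
  // Hypothetical Event entity backing the queue table; declaring it here
  // makes it available to checkEvents via context.entities.Event.
  entities: [Event],
  schedule: {
    // Poll the events table once a minute.
    cron: "* * * * *"
  }
}
```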
In this example, the checkEvents function would be responsible for checking the PostgreSQL table for new events and processing them.
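For illustration only, a minimal checkEvents could look roughly like the sketch below. The Event entity and its processed / createdAt fields are assumptions; the context object is the one Wasp passes to job worker functions, carrying the Prisma delegates for the declared entities.

```ts
// src/workers/events.ts -- assumed path, matching the declaration above.
export const checkEvents = async (_args: unknown, context: any) => {
  // Fetch events that have not been handled yet.
  const pending = await context.entities.Event.findMany({
    where: { processed: false },
    orderBy: { createdAt: "asc" },
  });

  for (const event of pending) {
    // ... do the actual work here, e.g. call the OpenAI API ...

    // Mark the event as handled so the next run skips it.
    await context.entities.Event.update({
      where: { id: event.id },
      data: { processed: true },
    });
  }
};
```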
Remember that this is a workaround and might not be the most efficient solution for a large number of events or a high rate of incoming events. For more complex use cases, you might want to consider using a dedicated event queue or stream processing system.
For more information, you can refer to the Wasp documentation on jobs (https://wasp-lang.dev/docs/advanced/jobs).
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai if I go with the suggested approach, are there mechanisms in place to ensure that there won't be multiple jobs running in parallel, which would require me to implement a locking logic on the queue?
Let me check the knowledge base for you. One moment! :books:
----------------------
The Wasp framework uses PgBoss as its job executor, which provides built-in support for handling job concurrency. This means that PgBoss automatically ensures that the same job is not executed multiple times concurrently, so you don't need to implement your own locking logic.
However, please note that this only applies to the same job. If you have different jobs that need to access the same resources (like a database table), you might still need to handle concurrency at the resource level.
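If you do need that kind of resource-level protection (for example, if two runs could ever overlap on the same queue table), one common pattern is an optimistic claim: flip a status column in a single UPDATE and only process the row if the update matched. This is a sketch under assumed names (Event entity, status field), not something Wasp provides out of the box:

```ts
// Hypothetical helper: returns true only for the one caller that manages to
// move the event from "pending" to "processing".
export const tryClaimEvent = async (eventId: number, context: any): Promise<boolean> => {
  // The UPDATE matches only while the row is still pending, so if two runs
  // race on the same event, exactly one of them sees count === 1.
  const { count } = await context.entities.Event.updateMany({
    where: { id: eventId, status: "pending" },
    data: { status: "processing" },
  });
  return count === 1;
};
```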
For more information, you can refer to the Wasp documentation on jobs (https://wasp-lang.dev/docs/advanced/jobs) and the PgBoss documentation (https://github.com/timgit/pg-boss/blob/8.4.2/README.md).
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
I think the bot gave a pretty good overview of our current limitations! We'll have to explore further whether PgBoss is the ultimate async jobs solution for us. Your input is super valuable for us and prompts us to think about more advanced use cases for jobs / async processing / event-based code.
If you could share your solution at some point, we'd love to understand what you built within the current limitations - it might inspire new APIs or new bits of the Wasp framework in the future 🙏
cc: @martinsos