Recurring job behavior with Fly.io
Fly.io scales apps down or up when needed. If there's no usage for an app it scales down to 0 machines. How does this work with recurring/scheduled jobs?
If I schedule a job to run at a specific time, and the app is scaled down to 0 machines at that time, will the job still run?
Obviously there will be some delay due to the cold start, but will it run after the app is scaled back up?
I haven't thought about that. I believe it won't run 🤨 You should configure your app to have at minimum 1 instance running. @martinsos food for thought
Is setting the minimum instances to 1 for the server enough? Or does it need to be set for the db as well? I'm not familiar with pg-boss, but I think it uses the db to store the jobs, correct?
Yep, the DB is just the "queue" mechanism, but the server is the one polling it to see if there are jobs to process 😊 https://github.com/timgit/pg-boss/blob/master/src/worker.js
GitHub
pg-boss/src/worker.js at master · timgit/pg-boss
Queueing jobs in Node.js using PostgreSQL like a boss - timgit/pg-boss
So, I would assume the server is enough to keep it ticking 😃
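For context, here is roughly how a pg-boss recurring job is wired up (a minimal sketch, assuming pg-boss v8+ where work() replaced subscribe(); this is not the exact Wasp-generated code, and the handler signature varies by version):

```js
import PgBoss from 'pg-boss';

// The database only stores the queue and the schedule; this Node process is what polls it.
const boss = new PgBoss(process.env.DATABASE_URL);
await boss.start();

// The cron schedule lives in the DB...
await boss.schedule('daily-report', '0 3 * * *');

// ...but the handler only runs while this server process is up and polling.
// If the server is scaled down to 0 machines, nothing polls, so nothing executes.
await boss.work('daily-report', async (job) => {
  console.log('running daily report', job.id);
});
```

So the DB being up isn't enough on its own; the server process has to be running to pick jobs up.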
@Heisenberg what you are asking is whether jobs that couldn't run because the server was down will run once the server comes back up some time later?
I have to admit I am not sure off the bat, but this will depend on how pg-boss handles things, so I would recommend checking their docs regarding that specific behaviour.
Is the db needed -> yeah, pg-boss uses the db to store the jobs, so you will need the db up for pg-boss to work.
Question though: why would you not have at least 1 instance of server and 1 instance of db running? Normally, you will want your app to have at least 1 instance of client, 1 of server, and 1 of the db, then it all works normally.
But if it can't learn from db that there is a job to execute, I guess it won't execute it, right?
When a user tries to access the website, say x.com, Fly automatically scales it up and starts a new machine to serve the request
And only when x.com sends a request to the server is a server machine started up
For most applications that slight delay during the cold start doesn't matter, so they can scale down to zero
@Heisenberg oh seriously, Fly does that? I was pretty sure my Fly app was running all the time, at least the server -> I mean I don't want it scaling down my server, my server is doing stuff (e.g. jobs)! This setup you are talking about, is this some kind of special setup on Fly, do you have any docs you can link to?
I'm not sure if it's the default setup, as the fly.toml is automatically created by Wasp. Wasp sets the minimum instances to zero.
This is the default fly setup as far as I am aware 😊
I've set the min_machines_running to 1 now, and it stays up
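For reference, the relevant part of the server's fly.toml would look roughly like this (key names per Fly's auto start/stop feature quoted in the forum post below; the Wasp-generated file may differ slightly, e.g. it may use [http_service] instead of [[services]]):

```toml
[[services]]
  auto_start_machines = true   # start a machine when a request comes in
  auto_stop_machines = true    # stop machines when there is no traffic
  min_machines_running = 1     # keep at least one machine up so scheduled jobs can run
```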
If jobs are running, maybe they keep running even if the server is scaled down to 0?
Oh man, yeah, this doesn't look great for jobs, hm! I think the default should be different, with 1 as the minimum
Yeah that would be better, I was confused why my jobs weren't running 😅
but now I'm confused about how yours are haha
The question is, how does Fly determine when to stop the server -> when activity is low? Because for the server that just serves the client it is ok if it is stopped, and I don't think the db will be stopped (I doubt it has this same setting in its toml? I should check), but for the server we don't want it to stop.
Btw the .toml you shared is for the client, maybe the one for the server doesn't have min_machines_running set to 0?
Oh my bad, I was editing it to set it to 1, must've messed that up. The default is zero though, yes
https://community.fly.io/t/setting-a-minimum-number-of-instances-to-keep-running-when-using-auto-start-stop/12861
Found a forum post confirming default is set to 0
Fly.io
Setting a minimum number of instances to keep running when using au...
We now support setting a minimum number of machines to keep running when using the automatic start/stop feature for Apps v2. This will prevent the specified number of machines from being stopped. Update your flyctl to the latest version and then in your fly.toml [[services]] auto_start_machines = true auto_stop_machines = true min_machine...
Wohooo @Heisenberg, you just became a Waspeteer level 2!
Ok thanks @Heisenberg -> I would put server to 1 for sure, client and database might be ok with 0 if Fly is smart enough to know how to turn them on when a request comes.
We will take a deeper look into this in the following days and ensure we have these minimums properly set by default!
Thanks for the quick response!
Only issue with this is that it's a little pricey for hobby projects haha
Usually with the scaling down you don't have to pay for a full month of usage, and it comes to much less
Yeah you are right, that sucks a bit, but since we are allowing people to run whatever they want on the server, including jobs and whatever code they want really, meaning it is a proper server and not serverless execution, we should by default be assuming something is always running on the server.
However, if you know you don't have anything running, no jobs or anything, you could put that minimum to 0.
Yep makes sense
Ok, created an issue for it, will solve it soon!
How is it going for you so far @Heisenberg , any other hiccups, besides seeding in production? Anything you like, dislike, could be better?
Pretty intuitive, ran into some issues with missing node packages etc., and some installation steps were missing from the docs, but I could figure those out
The most recent one being that the commander package was missing from my wasp installation, so I had to manually install it into wasp before being able to deploy
The starter templates and examples are really helpful, whoever worked on those did a great job 😁
Ha yes, @Vinny (@Wasp) and also @miho did most of the work on templates and examples, so they will be happy to hear this!
Missing node packages and missing installation steps -> if you can remember or replicate any of these, please do let us know, as we would love to make sure this doesn't happen for others! Also, what do you mean by the commander package missing from your wasp installation -> was that required of you by wasp deploy? The commander package at the OS/system level?
I'll create an issue in the future if I run into anything!
About the commander issue, my bad, I phrased that really badly. What I meant was that the output at .wasp was missing the commander package
Which was a requirement to deploy
So I had to cd into /.wasp/out/server and npm i commander
commander -> ok but wait, that is also quite weird, isn't it? Can you tell me, who required commander -> wasp, when you tried to do wasp deploy?
I'm not sure, this was for the embeddings starter, I hadn't made any changes myself
Ok interesting, we will inspect this then, thanks!