Cron Jobs and Celery Beat
I'm using Celery to run a task daily at 1:30pm. The task takes about 15 minutes, but the Celery container keeps running for the other 23 and a half hours doing nothing, so I set up a cron job to launch it half an hour before the task starts. The result I expected was that the container would start up, wait for its command from Celery beat, run the task, and shut down again. Instead, the job was skipped every time, and I can't even access the logs for the skipped runs to see what the issue was. Am I doing something wrong here?
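For reference, the schedule is set up roughly like this (a minimal sketch, not my exact code; the app name, broker URL, and the send_daily_email task are stand-ins):

```python
# tasks.py - rough sketch of the setup; names and broker URL are placeholders
from celery import Celery
from celery.schedules import crontab

app = Celery("tasks", broker="redis://localhost:6379/0")

# celery beat hands this to the worker at 1:30pm every day
app.conf.beat_schedule = {
    "send-daily-email": {
        "task": "tasks.send_daily_email",
        "schedule": crontab(hour=13, minute=30),
    },
}

@app.task
def send_daily_email():
    ...  # ~15 minutes of work
```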
Project ID: e300a81b-ca41-47bc-87df-2c57174eb297
Celery is a long-running process, meaning it stays running 24/7 idle and then does the job at the set time. That's incompatible with Railway's cron, which expects your app to start and exit as fast as possible; if the job stays running, all jobs after it will be skipped.
Got it, thanks. The docs alluded to this but I was hopeful I was misunderstanding. Any suggestions? Would allowing the app to sleep actually boot it up when celery-beat kicks it into action?
nope, it would stay sleeping forever and would never run jobs; a service can only be woken by external traffic.
the two options you have are -
- just run Celery like normal, no Railway scheduler.
- rewrite the job to not use Celery; make it a simple py script that does the job and exits when it's done, so you can use Railway's scheduler (see the sketch below).
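Something like this, assuming your task logic can be pulled out into a plain function (send_daily_email is a placeholder name, not something from your project):

```python
# send_daily_email.py - hypothetical standalone version of the task;
# Railway's scheduler starts the container, this runs once, then exits
import sys

def send_daily_email():
    ...  # the same logic the Celery task runs today

if __name__ == "__main__":
    try:
        send_daily_email()
        sys.exit(0)
    except Exception as exc:
        # a non-zero exit makes the failure visible in the scheduler
        print(f"daily email failed: {exc}", file=sys.stderr)
        sys.exit(1)
```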
this brings up the question: how much CPU and memory is your Celery service using? Asking so we can look for cheaper ways to run it.
It's quite a bit of memory, more than the web service or anything else. I set the concurrency to two; maybe that's overkill for a simple task? It uses 450 MB all day, until it does the task and temporarily jumps into the 500s. The CPU is fine, and sits at 0 when the task isn't running.
I was hoping not to rewrite the task lol. May be the best option though.
Solution
if you aren't running more than one task at the same time, try setting the concurrency to one.
but I don't think "rewrite" is the correct word; you would just be extracting the logic out into a simple py file that does what it does and then exits.
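For reference, concurrency can be set either as a worker flag or in config; a one-line sketch, reusing the hypothetical app object from the earlier snippet:

```python
# pin the worker pool to a single process; equivalent to starting the
# worker with: celery -A tasks worker --concurrency=1
app.conf.worker_concurrency = 1
```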
Fair enough, it would be a relatively simple task. It's just an important email that goes out daily, so it would entail a few rounds of testing to ensure it's running smoothly. I'll see how the usage looks after lowering the concurrency. Thanks for your help, Brody!
happy to help!
Cut the memory down to 113 MB, in case you were curious. That's in line with my other containers. As long as the task goes well tomorrow, my problem is solved!
awesome glad to hear it was a simple solution