Railway4mo ago
_mati

Wait until everything has completed its execution before deploying new container

Hey! I wanted to know if there's a way to wait until everything is done before stopping the execution of a container when I'm deploying a new one. I have some functions that execute other functions, and the total execution time of the original one can be around 5 to 7 minutes (they make a lot of external API calls). Sometimes when I'm deploying a new version of my software, the progress of these functions is lost because the container is stopped. I know there's already some overlap (I can see both containers up and running for a few seconds), but those few seconds are not enough. Sorry if this is a duplicate thread, I couldn't find a way to search for what I'm looking for in the server :|
7 Replies
Percy
Percy4mo ago
Project ID: N/A
_mati
_matiOP4mo ago
N/A. Worth mentioning that it would be nice if the old container didn't receive new requests while it's shutting down, which is what's happening right now if I'm not wrong.
Brody
Brody4mo ago
do you think you could write out a little timeline of your ideal deployment pathway?
_mati
_matiOP4mo ago
Yeah, so:
1. I have my container up and running. This container is processing some data and executing API calls that take some time to complete.
2. I make a change in my code and push it to master on GitHub. This triggers a new deploy in Railway.
3. Both containers (the old one and the new one) are now running in Railway. The old container keeps processing all the pending requests, and the new one receives all the new requests. Once the old container is done, it's removed.
That way I don't lose any progress when updating my code :) Now, my question is: is this currently possible in Railway, or is this more like a feature request? lol
My current workaround is to save the state of the requests in Redis and, once the new deployment is running, resume from the last state by fetching the data from Redis. But sometimes the old container is removed right when a task is about to finish, leaving everything broken and unable to start again.
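(Editor's note: a rough sketch of the Redis checkpoint workaround described above, assuming a Node/TypeScript service using ioredis. The key names, step structure, and `performStep` helper are illustrative, not taken from the thread.)

```ts
// Checkpoint each task's progress in Redis so a replacement container can
// resume from the last completed step instead of starting over.
import Redis from "ioredis";

const redis = new Redis(process.env.REDIS_URL ?? "redis://localhost:6379");

async function runTaskWithCheckpoints(taskId: string, steps: string[]) {
  // Read the last checkpoint; 0 if this task has never run before.
  const done = Number(await redis.get(`task:${taskId}:step`)) || 0;

  for (let i = done; i < steps.length; i++) {
    await performStep(steps[i]);                    // one slow external API call
    await redis.set(`task:${taskId}:step`, i + 1);  // checkpoint after each step
  }

  await redis.del(`task:${taskId}:step`);           // task finished, clear state
}

async function performStep(step: string): Promise<void> {
  /* external API call goes here */
}
```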
Solution
Brody
Brody4mo ago
If I understand you correctly, yes, I think it would be. You can set RAILWAY_DEPLOYMENT_DRAINING_SECONDS to however many seconds you think you'd need for the old deployment to finish its jobs. Then when you make a new deployment and it goes out, Railway will send SIGTERM to your old deployment and wait that amount of seconds before it force kills the container. So your app has to capture SIGTERM and delay exiting until all current tasks are done, while also not accepting any new jobs after SIGTERM was received. That way your old deployment finishes its jobs and doesn't accept any new ones, and your new deployment is free to pick up the new jobs.
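(Editor's note: a minimal sketch of the shutdown behaviour described in this answer, assuming a Node/TypeScript service built on Express; the thread never states the stack, and the route, port, and `runLongJob` helper are placeholders.)

```ts
// Graceful shutdown: on SIGTERM, stop accepting new work and exit only once
// all in-flight jobs have finished, within the draining window Railway allows.
import express from "express";

const app = express();
let inFlight = 0; // count of long-running jobs currently executing

app.post("/jobs", async (_req, res) => {
  inFlight++;
  try {
    await runLongJob(); // placeholder for the 5-7 minute chain of API calls
    res.sendStatus(200);
  } finally {
    inFlight--;
  }
});

const server = app.listen(3000);

// Railway sends SIGTERM to the old deployment, then waits up to
// RAILWAY_DEPLOYMENT_DRAINING_SECONDS before force-killing the container.
process.on("SIGTERM", () => {
  // Stop accepting new connections; new traffic goes to the new deployment.
  server.close();

  // Poll until all in-flight jobs have finished, then exit cleanly.
  const timer = setInterval(() => {
    if (inFlight === 0) {
      clearInterval(timer);
      process.exit(0);
    }
  }, 1000);
});

async function runLongJob(): Promise<void> {
  /* external API calls go here */
}
```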
_mati
_matiOP4mo ago
Good to know I won't have to wait for a new feature! :) My only question is: does Railway automatically send the traffic to the new deployment once it's deployed (while the old deployment is finishing up)? Or will I have to build that logic myself, so my old deployment sends the traffic to the new one?
Krøn
Krøn4mo ago
Railway automatically sends traffic to the new deployment once it is successful.