RunPod•12mo ago
Peps

Throttled

Hey 🙂 Is there something I can do to prevent getting throttled? I see the availability for the GPU I selected is high, and I'm also not using any network disk, so I'm a bit confused about what exactly is happening. ID: ofjdhe4djh1k5t
13 Replies
Peps
PepsOP•12mo ago
I ended up having to manually terminate the throttled worker, and after that it automatically spun up another worker that got my requests unstuck. Not sure why I had to do this manually to get my queue moving again.
Justin Merrell
Justin Merrell•12mo ago
Increase the max number of workers you have set.
Peps
PepsOP•12mo ago
In my use case I only ever want to run one worker at a time, as I'll be running a task about every 30 seconds that's not time-sensitive. I wouldn't mind if a throttled worker that's been stuck for several minutes got automatically terminated so another one could spin up.
justin
justin•12mo ago
I think increase the maximum number of workers, but set the queue delay under the advanced settings to something like 1 minute. You can set how long a request waits before it spins up a different worker, so you only pay for active workers. There are just some weird behaviors with a small maximum worker count.
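(For reference: the settings justin describes map onto the endpoint's scaling config, which can also be set through RunPod's GraphQL API. A minimal sketch, assuming the saveEndpoint mutation and the workersMax / scalerType / scalerValue field names; those are recalled from the GraphQL schema and should be checked against the current API docs before use.)

```python
import os

import requests

# ENDPOINT_ID is a placeholder; the API key comes from your RunPod account.
API_KEY = os.environ["RUNPOD_API_KEY"]
ENDPOINT_ID = "your-endpoint-id"

# Assumed mutation and field names; verify against RunPod's current schema.
mutation = """
mutation {
  saveEndpoint(input: {
    id: "%s"
    workersMax: 2             # a second worker so one throttled worker can't stall the queue
    scalerType: "QUEUE_DELAY" # scale based on how long requests sit in the queue
    scalerValue: 60           # spin up another worker after ~60s of queue delay
  }) {
    id
    workersMax
  }
}
""" % ENDPOINT_ID

resp = requests.post(
    "https://api.runpod.io/graphql",
    params={"api_key": API_KEY},
    json={"query": mutation},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```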
Peps
PepsOP•12mo ago
Ah yeah, that sounds good. In my case I'm not really doing anything time-sensitive, so I'll probably just set it to something like 5 minutes so my queue isn't stuck forever. It would still be nice if workers that are throttled for a long time were automatically terminated, but this should work for now. Thanks!
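(There doesn't appear to be a documented API call to terminate a single throttled serverless worker, which is why it had to be done from the console above, but the endpoint's health route can at least be polled so a stuck queue gets noticed. A minimal watchdog sketch, assuming the /v2/{endpoint_id}/health route and that its workers object reports a throttled count, which is an assumption based on the dashboard's worker states; the response may only expose idle/running:)

```python
import os
import time

import requests

API_KEY = os.environ["RUNPOD_API_KEY"]
ENDPOINT_ID = "your-endpoint-id"  # placeholder
HEALTH_URL = f"https://api.runpod.ai/v2/{ENDPOINT_ID}/health"


def poll_health() -> dict:
    """Fetch queue and worker counts for the endpoint."""
    resp = requests.get(
        HEALTH_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()


while True:
    health = poll_health()
    workers = health.get("workers", {})
    jobs = health.get("jobs", {})
    # "throttled" is an assumed key; if it isn't present, fall back to
    # watching for inQueue > 0 with no running workers.
    if workers.get("throttled", 0) > 0 and jobs.get("inQueue", 0) > 0:
        print("Queue is backed up behind a throttled worker; "
              "terminate it from the console to unstick the queue.")
    time.sleep(30)
```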
Peps
PepsOP•12mo ago
Bit confused about what's happening here now.
[image attached]
Peps
PepsOP•12mo ago
My queue has been empty for a while now, and I'm seeing this. My idle timeout is low and active workers is 0, so I don't quite understand what it's trying to do when my queue has already been empty for more than 5 minutes or so.
justin
justin•12mo ago
xD Yeah, don't worry about it. RunPod adds an additional 3 workers in the background to help with scaling issues, but you aren't paying for them and they don't count against your limit. I see the same thing. I wish RunPod explained what's going on here better, but I've never had an issue with it.
Peps
PepsOP•12mo ago
Ah alright. I didn't have this before when sticking with 1 max worker, so I got a bit confused seeing it after increasing the max to 2.
justin
justin•12mo ago
Yeah - it's a weird thing they do when your worker max is greater than 1. 🤷 I wish it was documented / explained somewhere, but so be it. https://discord.com/channels/912829806415085598/1185822418053386340 I wish for it too
Peps
PepsOP•12mo ago
Utilizing network storage if your docker image is 20+ GB
Oh, uh, my docker image is definitely not 19 GB 🙈 It's incredibly bloated because I had to jump through a lot of hoops to get GPU support working with TensorFlow for some reason. I'm still building this as a POC, so I haven't really slimmed down the image yet.
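(As an aside, a quick way to confirm GPU support actually made it into the image is to run TensorFlow's standard device query inside the container; tf.config.list_physical_devices is part of TensorFlow's public API:)

```python
import tensorflow as tf

# Lists the GPUs TensorFlow can see; an empty list usually means the image
# is missing the CUDA/cuDNN libraries or no GPU is attached to the container.
gpus = tf.config.list_physical_devices("GPU")
print(f"TensorFlow {tf.__version__} sees {len(gpus)} GPU(s): {gpus}")
```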
flash-singh
flash-singh•12mo ago
We treat max 1 worker as development and give you only 1 worker, since most people want to SSH in and debug. 2 or more workers are considered a production workload, and we add additional cached workers to help reduce throttled workers.
justin
justin•12mo ago
haha, I think under 20 GB is actually OK. I think once it exceeds like 30 GB, or climbs around that region, it gets bad.