Reporting/blacklisting poorly performing workers
I've noticed that every now and then a bad worker is spawned for my endpoint which takes forever to complete a job compared to other workers running the same job. Typically my job takes ~40s, but occasionally there are workers with the same GPU that take 70s instead. I want to blacklist these pods from running my endpoint so performance isn't impacted.
18 Replies
Do you have an endpoint id for it?
So you're running the exact same job, which should take the same amount of time, but on another bad worker it takes longer?
I guess it's best to create a ticket with RunPod so they can check on their side.
@1AndOnlyPika
Escalated To Zendesk
The thread has been escalated to Zendesk!
Yes, endpoint
8ba6bkaiosbww6
Also, the delay times are very inconsistent, some take 20s to start even with FlashBoot on.
My regular cold start time is 5s.
The same job, on the same machine, in US-OR-1.
Same, I would love it if you could specify in the request that a job should not be directed to a particular worker ID.
I have a retry mechanism for when executionTimeout happens, but then most of the time the job goes back to the same worker ID :|
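For reference, this is roughly what my client-side retry looks like. It's just a minimal sketch against the public serverless REST routes (/run, /status/{id}, /cancel/{id}); the ENDPOINT_ID and API_KEY placeholders, the workerId field on the status response, and the executionTimeout policy field are assumptions on my side, so check them against your own endpoint:

```python
import time
import requests

# Assumed placeholders - substitute your own endpoint ID and API key.
ENDPOINT_ID = "YOUR_ENDPOINT_ID"
API_KEY = "YOUR_RUNPOD_API_KEY"
BASE = f"https://api.runpod.ai/v2/{ENDPOINT_ID}"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

EXPECTED_SECONDS = 40   # normal runtime for this job
SLOW_FACTOR = 1.5       # treat anything past 1.5x as a bad worker
MAX_ATTEMPTS = 3

def run_with_retry(payload: dict) -> dict:
    """Submit a job; if it runs well past the normal time, cancel it and
    resubmit, remembering which worker IDs were slow (the status response
    is assumed to expose a workerId field)."""
    slow_workers: set[str] = set()

    for attempt in range(1, MAX_ATTEMPTS + 1):
        # executionTimeout (ms) in the request policy is assumed to cap how
        # long a single attempt can run server-side.
        body = {
            "input": payload,
            "policy": {"executionTimeout": int(EXPECTED_SECONDS * SLOW_FACTOR * 1000)},
        }
        job = requests.post(f"{BASE}/run", json=body, headers=HEADERS, timeout=30).json()
        job_id = job["id"]
        started = time.time()

        while True:
            status = requests.get(f"{BASE}/status/{job_id}", headers=HEADERS, timeout=30).json()
            worker_id = status.get("workerId")  # assumed field name
            if status.get("status") == "COMPLETED":
                return status["output"]
            if status.get("status") in ("FAILED", "CANCELLED", "TIMED_OUT"):
                break  # fall through to retry
            if time.time() - started > EXPECTED_SECONDS * SLOW_FACTOR:
                # Too slow: cancel this attempt and note the worker so I can
                # report it and see whether the retry landed on it again.
                requests.post(f"{BASE}/cancel/{job_id}", headers=HEADERS, timeout=30)
                if worker_id:
                    slow_workers.add(worker_id)
                break
            time.sleep(2)

        print(f"attempt {attempt} failed; slow workers so far: {slow_workers}")

    raise RuntimeError(f"job failed after {MAX_ATTEMPTS} attempts; slow workers: {slow_workers}")
```

Even with this, there's nothing stopping the queue from handing the retry straight back to the worker I just flagged, which is exactly why a blacklist option would help.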
I've found that the US-OR-1 location always has issues, whether it's a slower worker or workers with broken GPUs that won't even start the container. Going to remove it from the allowed locations for the time being.
:dead:
Hey RunPod, is there maybe a way to report these broken servers and have them fixed?
I have the same issue; I really need an API for blacklisting these bad workers.
I am also experiencing this issue
I made a ticket on RunPod's website and told them that several of our companies really need this problem solved. If they reply, I will share it with the group.
Yeah, there should at least be some simple way for us to report a worker and have somebody investigate and fix it.
Does it happen that often?
Well, not very often, maybe 10% of the time, but it's still annoying to deal with.
Ah yeah, this shouldn't happen that often under good conditions.
I'm not having the same issue, but my request is almost the same. Sometimes a worker fails because of internal errors, misconfiguration, running out of space due to memory-purging errors, etc. (it happens in less than 1% of cases). Unfortunately, when that worker is in the batch, every task that goes to it fails. So it's always manual work to kick that worker off the endpoint to stop the errors. We definitely need an API to kick unhealthy workers from endpoints!
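In the meantime, this is roughly how I watch for it: a small sketch that polls the endpoint's /health route and flags when failed jobs start climbing, so someone knows it's time to go kick the bad worker by hand. The exact shape of the /health response (the jobs/workers counter names) is an assumption on my part, as are the placeholders:

```python
import time
import requests

# Assumed placeholders - substitute your own endpoint ID and API key.
ENDPOINT_ID = "YOUR_ENDPOINT_ID"
API_KEY = "YOUR_RUNPOD_API_KEY"
HEALTH_URL = f"https://api.runpod.ai/v2/{ENDPOINT_ID}/health"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

POLL_SECONDS = 60

def watch_endpoint() -> None:
    """Poll /health and print an alert when the failed-job count increases,
    so a human can go remove the unhealthy worker or open a ticket."""
    last_failed = 0
    while True:
        health = requests.get(HEALTH_URL, headers=HEADERS, timeout=30).json()
        jobs = health.get("jobs", {})        # counters like completed/failed/inQueue (names assumed)
        workers = health.get("workers", {})  # counters like idle/running (names assumed)

        failed = jobs.get("failed", 0)
        if failed > last_failed:
            print(f"failed jobs went from {last_failed} to {failed}; "
                  f"workers: {workers} -- check the endpoint for a bad worker")
        last_failed = failed
        time.sleep(POLL_SECONDS)
```

It only tells me something is wrong; the actual fix is still manual, which is the part an API could automate.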
We need a way to get someone on the team looking into the specific worker; otherwise it may be reallocated even after being deleted.