RunPod•3w ago
jhappy

Serverless pod tasks stay "IN_QUEUE" forever

I have a TTS model that I've deployed flawlessly as a RunPod Pod, and I want to convert it to a serverless endpoint to save costs. I did an initial attempt, but when I send a request to the deployed serverless endpoint, the task just stays "queued" forever. The last line of my Dockerfile is
CMD ["python", "-u", "runpod.py"]
CMD ["python", "-u", "runpod.py"]
Contents of runpod.py:
import runpod
from api import handle

def handler(event):
    print('In handler')
    input = event['input']
    return handle(
        input.get("is_stream", False),
        input.get("clip_id"),
        input.get("refer_wav_path"),
        input.get("prompt_text"),
        input.get("prompt_language"),
        input.get("text"),
        input.get("text_language"),
        input.get("cut_punc"),
        input.get("top_k", 15),
        input.get("top_p", 1.0),
        input.get("temperature", 1.0),
        input.get("speed", 1.0),
        input.get("inp_refs", [])
    )

if __name__ == 'main':
    print('In runpod.py...')
    runpod.serverless.start({'handler': handler})
    print('started handler!')
Input:
{
  "input": {
    "clip_id": "12345",
    "is_stream": false,
    "refer_wav_path": "test_short.wav",
    "prompt_text": "Reference text here",
    "prompt_language": "en",
    "text": "Generate this text!",
    "text_language": "en"
  }
}
Anyone know what might be going wrong? I am willing to pay a bounty if you can help me solve this issue. The container logs just print the CUDA notice repeatedly (the worker appears to keep starting and stopping), and CPU utilization is generally high. Not sure what I should do to debug.
7 Replies
nerdylive•3w ago
Is input.get('key') the same as using input['key']? Any logs? Try changing CMD to ENTRYPOINT (all caps)
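(For reference, the ENTRYPOINT form of that last Dockerfile line would look like the sketch below; whether it changes the queueing behavior here is untested.)

ENTRYPOINT ["python", "-u", "runpod.py"]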
BBAzn•3w ago
My A1111 worker for serverless does the same thing. It had been working for a few months, but since last week it's been broken; not sure if it's a server issue or the code, because there weren't any changes to the code. It's just stuck at IN_QUEUE and keeps running, and the log says something like "server not starting up, retrying".
jhappyOP•3w ago
Figured out issue 1: I mistyped the guard as if __name__ == 'main': when it should be '__main__', not 'main'. Checking if this works now. Welp, no luck with that fix; it's still broken. In the logs I can now see "worker exited with code 1" a few times, but no logs beyond that. Though in one of the workers I saw "In runpod.py..." printed a couple of times as it appeared to turn on and off. No "started handler!". Debugging guidance would be greatly appreciated.
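(For anyone skimming, the corrected guard described above is:)

if __name__ == '__main__':          # double underscores on both sides
    print('In runpod.py...')
    runpod.serverless.start({'handler': handler})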
nerdylive•3w ago
Should be from runpod I guess
yhlong00000•3w ago
from api import handle / return handle(...): this handle function is probably what's not working.
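(One way to check that, as a minimal sketch: wrap the existing handler so any exception raised inside handle() gets printed to the container logs instead of the worker just exiting with code 1. The log_exceptions helper below is illustrative, not part of the RunPod SDK.)

import functools
import traceback

def log_exceptions(fn):
    # Wrap a serverless handler so any exception is printed to the container logs.
    @functools.wraps(fn)
    def wrapped(event):
        try:
            return fn(event)
        except Exception:
            traceback.print_exc()
            return {"error": traceback.format_exc()}
    return wrapped

# then start the worker with the wrapped handler:
# runpod.serverless.start({'handler': log_exceptions(handler)})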
jhappyOP•3w ago
Nope, that's not the issue. But I did find the real one: my file is called runpod.py, so when I do import runpod it imports itself rather than the runpod package. Isn't Python wonderful? 😛
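(A sketch of that fix, assuming a rename to something like rp_handler.py; the new filename is illustrative, not from the thread.)

# rp_handler.py -- renamed from runpod.py so "import runpod" resolves to the installed SDK, not this file
import runpod
from api import handle

def handler(event):
    # same body as before: unpack event['input'] and pass the fields to handle(...)
    ...

if __name__ == '__main__':
    runpod.serverless.start({'handler': handler})

# and the Dockerfile line becomes:
# CMD ["python", "-u", "rp_handler.py"]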
nerdylive•3w ago
Ahh lel