Hi, I'm new to RunPod and I'm trying to debug this error:

Failed to return job results. | 400, message='Bad Request', url='https://api.runpod.ai/v2/ttb9ho6dap8plv/job-done/qlj0hcjbm08kew/5824255c-1cfe-4f3c-8a5f-300026d3c4f5-e1?gpu=NVIDIA+RTX+A4500&isStream=false'
Is there any way to fetch more log details than this? I learned that the /logs endpoint is only for pods. It would also help tremendously if I could debug the Docker image locally, exactly as it runs on the endpoint. So far I've only tested it by running rp_handler.py directly, which obviously did not surface all the issues.
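For reference, this is roughly how I've been invoking the handler directly - just a sketch, it assumes the handler function in rp_handler.py is called run, and the input payload below is made up:
```python
# local_test.py - rough sketch, not an official RunPod workflow.
# Calls the worker's handler directly with a fake job dict so handler-side
# exceptions show up as a normal Python traceback instead of only in the
# serverless logs. Assumes rp_handler.py exposes a function `run(job)`;
# if it calls runpod.serverless.start() at module level, move that call
# under `if __name__ == "__main__":` first so importing it doesn't block.
import json

from rp_handler import run  # assumption: the handler function is named `run`

fake_job = {
    "id": "local-debug-job",
    "input": {
        # made-up payload shape - replace with whatever the endpoint really takes
        "audio": "https://example.com/sample.wav",
        "model": "base",
    },
}

output = run(fake_job)

# json.dumps raises TypeError if the output isn't JSON-serializable, which
# would also be a problem when the real worker posts results back to the API.
print(json.dumps(output, indent=2))
```
To exercise the image itself rather than just the script, running the container with a test_input.json in the working directory (which the runpod SDK is supposed to pick up for local runs) should get closer to the real flow, but I haven't verified that path here.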
3 Replies
wrichert (OP) · 3d ago
Some more info on this: it's a modified version of https://github.com/runpod-workers/worker-faster_whisper/tree/main. I've now built a vanilla Docker image from that repo (only reduced the number of models) and deployed it as a new serverless endpoint. Same result:
--- Starting Serverless Worker | Version 1.7.7 ---
{"requestId": null, "message": "Jobs in queue: 1", "level": "INFO"}
{"requestId": null, "message": "Jobs in progress: 1", "level": "INFO"}
{"requestId": "73d2aa78-8b80-4227-bf88-4cc7df6de1c8-e1", "message": "Started.", "level": "INFO"}
{"requestId": "73d2aa78-8b80-4227-bf88-4cc7df6de1c8-e1", "message": "Failed to return job results. | 400, message='Bad Request', url='https://api.runpod.ai/v2/foo1p6tt05f61e/job-done/tzwdwi0qeb147v/73d2aa78-8b80-4227-bf88-4cc7df6de1c8-e1?gpu=NVIDIA+RTX+A4500&isStream=false'", "level": "ERROR"}
Could it be that I'm building this with podman (but with --format docker) on a Mac (but with --platform linux/amd64)?
Jason · 3d ago
Maybe - you could try that. Or there could be some problem with the whisper repo / its dependencies. If you try the normal image that is linked in the readme.md (the one they've built), does it work?
wrichert (OP) · 2d ago
No, the normal repo does not actually work right now. I've seen that faster-whisper was downgraded from 1.1.0 to 0.10.0 - I guess because of the cuDNN issues. However, I managed to fix it: https://github.com/runpod-workers/worker-faster_whisper/pull/53 - with this I built fivetwosix/worker-faster_whisper:0.0.0, which works fine as a serverless worker.
GitHub: Upgrading faster_whisperer to 1.1.1 and a bunch of other fixes to t...
Switches to latest faster-whisperer, which in turn requires nvidia/cuda:12.3.2-cudnn9-runtime-ubuntu22.04 as the Docker base. Also pulled in a fix from runpod-python, which is not yet merged - othe...
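If anyone else runs into the cuDNN problem: a quick way to check whether the built image can actually use the GPU is to run something like this inside the container - just a sketch, the model size and audio path are placeholders:
```python
# check_gpu.py - hedged sketch: run inside the built image (e.g. via
# `docker run --gpus all <image> python check_gpu.py`) to see whether
# faster-whisper can load a CUDA model; missing or incompatible cuDNN
# libraries usually show up right here as library-loading errors.
from faster_whisper import WhisperModel

# "tiny" and sample.wav are placeholders; any model size / audio file works.
model = WhisperModel("tiny", device="cuda", compute_type="float16")
segments, info = model.transcribe("sample.wav")

print(f"detected language: {info.language} (p={info.language_probability:.2f})")
for segment in segments:
    print(f"[{segment.start:.2f} -> {segment.end:.2f}] {segment.text}")
```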
