yasyf
RunPod
Created by yasyf on 1/18/2025 in #⚡|serverless
Can we get our serverless worker limit increased?
Up to 50 or so! I also emailed
5 replies
RunPod
Created by yasyf on 12/12/2024 in #⚡|serverless
Constantly getting "Failed to return job results."
{
  "endpointId": "mbx86r5bhruapo",
  "workerId": "r23nc1mgj01m13",
  "level": "error",
  "message": "Failed to return job results. | 400, message='Bad Request', url='https://api.runpod.ai/v2/mbx86r5bhruapo/job-done/r23nc1mgj01m13/986b42c0-cde3-4732-9624-d0166e9f01bf-u1?gpu=NVIDIA+L40&isStream=false'",
  "dt": "2024-12-12 09:29:31.43862592"
}
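For context, a minimal sketch of the worker side (the handler body is a stand-in, not the real code): the RunPod Python SDK JSON-serializes whatever the handler returns and posts it to the job-done URL in the log, so a non-serializable or oversized return payload is one possible, unconfirmed trigger for the 400.
```python
# Minimal sketch of the worker-side handler involved (hypothetical body).
# The SDK serializes the handler's return value and posts it to the
# /job-done URL seen in the log above.
import json

import runpod


def handler(job):
    result = {"output": "..."}  # stand-in for the real inference work
    # A non-JSON-serializable or oversized return value is one possible
    # trigger for a 400 at the job-done step (assumption, not confirmed).
    json.dumps(result)  # fail fast locally if the payload can't serialize
    return result


runpod.serverless.start({"handler": handler})
```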
4 replies
RunPod
Created by yasyf on 9/26/2024 in #⚡|serverless
524 Timeouts when waiting for new serverless messages
After my async Python serverless handler finishes one request, I then start getting these on that box:
2024-09-26T22:11:55.344188433Z {"requestId": null, "message": "Failed to get job, status code: 524", "level": "ERROR"}
This seemingly prevents the auto-shutdown after N seconds from happening, so our runners stay up forever. One example is zpatg26htp69og.
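For reference, the handler shape in play, as a minimal sketch (the body is a stand-in): HTTP 524 is Cloudflare's origin-timeout status, and here it surfaces from the SDK's request for the next job after the handler completes, which is apparently what keeps the worker from idling out.
```python
# Minimal sketch of the async handler shape described above (hypothetical
# body). The 524 "Failed to get job" errors come from the SDK fetching the
# next job after this returns, not from the handler itself.
import asyncio

import runpod


async def handler(job):
    await asyncio.sleep(1)  # stand-in for the real async work
    return {"output": "done"}


runpod.serverless.start({"handler": handler})
```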
11 replies