RunPod
Created by inc3pt.io on 2/26/2025 in #⚡|serverless
EUR-IS datacenter blacklisted by Elevenlabs?
That's what I ended up doing, but I had a hard time figuring out the issue. The problematic datacenter was EUR-IS.
4 replies
RunPod
Created by Yebs on 2/26/2025 in #⚡|serverless
30 minutes pending in serverless
No description
8 replies
RunPod
Created by inc3pt.io on 12/18/2024 in #⚡|serverless
Reusing containers from Github integration registry
Ok, thank you
3 replies
RunPod
Created by inc3pt.io on 12/13/2024 in #⚡|serverless
Git LFS on Github integration
I understand, thank you guys for the effort
14 replies
RunPod
Created by inc3pt.io on 12/13/2024 in #⚡|serverless
Git LFS on Github integration
My endpoint is now using a temporary fix: the model is downloaded from Google Drive instead of Git LFS
14 replies
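The Google Drive workaround mentioned above can be sketched with the standard library alone. This is a hypothetical illustration, not the poster's actual code: the file ID and destination path are placeholders, and the direct-download URL form shown only works for small, publicly shared files (large files require an extra confirmation step).

```python
import urllib.request


def gdrive_url(file_id: str) -> str:
    """Build a direct-download URL for a small, publicly shared Drive file."""
    return f"https://drive.google.com/uc?export=download&id={file_id}"


def fetch_model(file_id: str, dest: str) -> str:
    """Download the model once at cold start instead of shipping it via Git LFS."""
    urllib.request.urlretrieve(gdrive_url(file_id), dest)
    return dest
```

In a serverless image this would typically run at container start, before the handler begins accepting jobs, so the weight file never has to live in the Git repository at all.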
RunPod
Created by inc3pt.io on 12/13/2024 in #⚡|serverless
Git LFS on Github integration
No description
14 replies
RunPod
Created by inc3pt.io on 12/13/2024 in #⚡|serverless
Git LFS on Github integration
I will try that
14 replies
RunPod
Created by spooky on 10/30/2024 in #⚡|serverless
Jobs queued for minutes despite lots of available idle workers
Same in 1.7.4; no problem with 1.7.0.
21 replies
RunPod
Created by vitalik on 10/10/2024 in #⚡|serverless
Job retry after successful run
@yhlong00000 I have updated the RunPod SDK to 1.7.4 again today to see whether the issue of only one job moving from IN_QUEUE to IN_PROGRESS still persists. It does: only one job can be processed at a time. My jobs are long-running. Endpoint: j2j9d6odh3qsi3, 1 worker, with Queue Delay as the scaling strategy. Note: with version 1.7.0 I had no issues at all.
27 replies
RunPod
Created by vitalik on 10/10/2024 in #⚡|serverless
Job retry after successful run
I will retry it later today and let you know how it goes, thanks.
27 replies
RunPod
Created by vitalik on 10/10/2024 in #⚡|serverless
Job retry after successful run
Reverted to v1.7.0; I find this version more efficient than 1.6.2.
27 replies
RunPod
Created by vitalik on 10/10/2024 in #⚡|serverless
Job retry after successful run
@yhlong00000 I am still having issues with 1.7.4:
- Only one request moves to IN_PROGRESS; all others stay in IN_QUEUE, even though the endpoint accepts multiple requests and there are available workers in the idle state. The tasks are long-running. Worker ID for you to debug: ddywfiz37lbsaz
- Possibly a web UI bug, but jobs also still appeared IN_PROGRESS in the web UI while the corresponding workers were no longer active (workers jwesqcl6bb0194 and 1t9jehqvp73esy)
- One instance of 400 Bad Request with another endpoint: 2024-10-27T12:04:05.295467963Z {"requestId": "f277cfe0-45e1-4187-90f0-15abb69348c3-u1", "message": "Failed to return job results. | 400, message='Bad Request', url=URL('https://api.runpod.ai/v2/c2b******f1mf/job-done/jwesqcl6bb0194/f277cfe0-45e1-4187-90f0-15abb69348c3-u1?gpu=$RUNPOD_GPU_TYPE_ID&isStream=false')", "level": "ERROR"}
27 replies
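The stuck-IN_QUEUE symptom described in this thread can be checked from outside the worker by polling the endpoint's status route of the RunPod serverless REST API. A minimal stdlib sketch, where the endpoint ID, job ID, and API key are placeholders for the caller's own values:

```python
import json
import urllib.request

API_BASE = "https://api.runpod.ai/v2"


def status_url(endpoint_id: str, job_id: str) -> str:
    """URL of the serverless /status route for one submitted job."""
    return f"{API_BASE}/{endpoint_id}/status/{job_id}"


def job_status(endpoint_id: str, job_id: str, api_key: str) -> str:
    """Return the job's current state, e.g. IN_QUEUE, IN_PROGRESS, COMPLETED."""
    req = urllib.request.Request(
        status_url(endpoint_id, job_id),
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["status"]
```

Polling this route for each submitted job makes it easy to see whether the queue is draining or whether, as reported above, only one job at a time ever leaves IN_QUEUE.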
RunPod
Created by vitalik on 10/10/2024 in #⚡|serverless
Job retry after successful run
FYI - we have reverted to 1.7.0 as we have noticed that 1.6.2 has a lower FPS (we are processing frames in real time).
27 replies
RunPod
Created by vitalik on 10/10/2024 in #⚡|serverless
Job retry after successful run
Okay I'll revert to 1.6.2 then, thanks for the info
27 replies
RunPod
Created by vitalik on 10/10/2024 in #⚡|serverless
Job retry after successful run
1.7.2 had other issues that were worse IMO, like freezing requests that would just fill up the request queue. I am thinking of going back to 1.6.2; that was the last version that worked well for me. I feel like the runpod-python SDK is not being actively developed, and issues are persisting for too long. @yhlong00000 can you please step in here?
27 replies
RunPod
Created by vitalik on 10/10/2024 in #⚡|serverless
Job retry after successful run
Same issue, started to happen with 1.7.3
27 replies
RunPod
Created by Ethan Blake on 10/10/2024 in #⚡|serverless
Why too long delay time even if I have active worker ?
@yhlong00000 when is the release of 1.7.3 planned?
9 replies
RunPod
Created by Xqua on 10/10/2024 in #⚡|serverless
Some serverless requests are Hanging forever
@yhlong00000 the request IDs are not available anymore. Another weird behavior that happened at the same time was that all Worker Configurations were showing up as unavailable.
5 replies
RunPod
Created by Xqua on 10/10/2024 in #⚡|serverless
Some serverless requests are Hanging forever
Same here: weird behaviour with stalling requests last night (GMT). Happened on CPU.
5 replies
RunPod
Created by Kyle McDonald on 10/5/2024 in #⛅|pods
Is it possible to run a WebRTC server on a pod?
It is possible. You don't need to expose UDP ports for the WebRTC session; you only need to send back the server's SDP answer after the offer SDP is received from a client. Just host your TURN server somewhere else if you're going to run your own.
8 replies