RunPod

We're a community of enthusiasts, engineers, and enterprises, all sharing insights on AI, Machine Learning and GPUs!

⚡|serverless

⛅|pods

30 minutes pending in serverless

wasn't like this yesterday

Is there a maximum Runtime?

Hi, when I try running a job on my handler locally, everything works just fine; the job runs for about 12 minutes. However, when I test my job with a serverless worker, after around 10 minutes the job fails right in the middle of processing without throwing any error, and the worker gets killed. Is there a maximum time a worker can run a job? I could not find anything related to this in the docs.
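For what it's worth, RunPod's per-request execution policy (per its docs) can override the default 10-minute execution timeout for a single job. A minimal sketch of building such a request body, assuming the documented `policy.executionTimeout` field in milliseconds; the input fields here are hypothetical:

```python
import json

def build_job_payload(job_input, timeout_ms):
    # Request body for POST /run or /runsync; the `policy` block
    # (per RunPod's execution-policy docs) lets this request run
    # longer than the endpoint's default per-job limit.
    return {
        "input": job_input,
        "policy": {"executionTimeout": timeout_ms},
    }

# Allow up to 20 minutes for a ~12 minute job.
payload = build_job_payload(
    {"audio_url": "https://example.com/a.wav"},  # hypothetical input
    20 * 60 * 1000,
)
print(json.dumps(payload))
```

This only raises the limit for the request that carries the policy; the endpoint-level default still applies to everything else.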

EUR-IS datacenter blacklisted by Elevenlabs?

I have a strange issue happening since yesterday: my serverless instance could not establish a wss connection with the Elevenlabs API; it throws a 403 error pointing at the following link: https://help.elevenlabs.io/hc/en-us/articles/22497891312401-Do-you-restrict-access-to-the-service-and-platform-for-any-specific-countries Not sure if this is specific to Runpod or Elevenlabs, but when I changed the datacenter to EUR-RO, the issue disappeared....

queue delay times

Hi, I'm seeing really long delay times, even though there's nothing in the queue, and this is a really small CPU serverless endpoint. Any idea what causes this?

serverless qwen-audio model deployment, can't see any error, getting worker exited with exit code 1

I have set up the worker-template for processing some audio files with Qwen2-Audio-7B-Instruct. The image build was successful, but when I make a request with my inputs, it doesn't change the status of my input in the queue and it shows "worker exited with exit code 1" in the logs. Can't find what I am doing wrong. Please help!!!
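Exit code 1 with nothing in the logs usually means an unhandled exception killed the process before anything was flushed. A hedged sketch of a handler that traps the error and prints the traceback so it surfaces in the endpoint logs (the input field and the start call follow the usual worker-template pattern, not this specific repo):

```python
import os
import traceback

def handler(job):
    # Wrap the real work so an exception is printed to the endpoint
    # logs instead of the container dying silently with exit code 1.
    try:
        audio = job["input"]["audio_url"]  # hypothetical input field
        # ... run the Qwen2-Audio inference here ...
        return {"transcription": f"processed {audio}"}
    except Exception as err:
        traceback.print_exc()        # this is what shows up in the logs
        return {"error": str(err)}   # fail the job but keep the worker alive

if __name__ == "__main__" and os.environ.get("RUNPOD_ENDPOINT_ID"):
    import runpod  # only available inside the worker image
    runpod.serverless.start({"handler": handler})
```

With the try/except in place, a bad request comes back as a job error instead of a dead worker, which makes the actual bug visible.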

How to Speed Up S3 Upload or Make it Async in RunPod Serverless Deployments

I am currently exploring using RunPod as our primary in-house model deployment platform instead of Replicate (our current preferred platform). Our in-house models are mostly txt2img/img2img custom models. One of the issues I'm facing while testing RunPod is long S3 upload times. For example, for one of our processes, the prediction time is ~1 second, but the S3 upload is taking up to 4-5 seconds (depending on image size), significantly increasing the overall prediction time. This causes two main problems:...
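Not RunPod-specific, but a common workaround is to fire the upload on a background thread and return the (predictable) object URL immediately. A sketch with a stand-in uploader; in real code the stand-in would be e.g. boto3's `upload_fileobj`, and the bucket and key here are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

_uploader = ThreadPoolExecutor(max_workers=4)

def upload_to_s3(image_bytes, key):
    # Stand-in for the real upload (e.g. boto3 s3.upload_fileobj),
    # which is assumed to take several seconds.
    return f"s3://my-bucket/{key}"  # hypothetical bucket

def handler_sketch(image_bytes, key):
    # Start the upload in the background and return the predictable
    # object URL right away, so the response is not blocked on it.
    future = _uploader.submit(upload_to_s3, image_bytes, key)
    return {"url": f"s3://my-bucket/{key}", "upload": future}

result = handler_sketch(b"fake-png-bytes", "out/img1.png")
print(result["url"])
```

Caveat: if the endpoint scales the worker down right after the response, an in-flight background upload can be killed, so this trade-off needs testing against your idle-timeout settings.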

output is undefined on response

Hello, I am running the serverless endpoint and I get a return in the console for a request made on the site, but when I use the SDK with the runSync function it does not give me an output. Instead it just says that it succeeds and is completed, but no output object is present on the response. Here is the response printed as a table
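One way to narrow this down is to check the raw status response before trusting `output`: a job can report COMPLETED while the `output` field is missing, e.g. when the handler returned `None` or a non-serializable value. A defensive sketch (the field names follow the usual status-response shape; this is not an official SDK helper):

```python
def extract_output(resp):
    # A COMPLETED job can still be missing `output`, e.g. when the
    # handler returned None or a value that failed serialization.
    if resp.get("status") != "COMPLETED":
        raise RuntimeError(f"job not finished: {resp.get('status')}")
    if "output" not in resp:
        raise RuntimeError("COMPLETED but no `output` - check the handler's return value")
    return resp["output"]

print(extract_output({"status": "COMPLETED",
                      "output": {"image_url": "https://example.com/x.png"}}))
```

If this raises on your responses, the fix is usually in the handler's return value rather than in the SDK call.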

Locally testing a worker where the consuming code relies on the job ID

Hi there, I'm working on a codebase where a RunPod worker is used to execute a workload that takes ~40 seconds, and then the result is sent to the webhook with the job ID and state. I have been attempting to test this worker locally for faster iteration, but I've been discovering that RunPod's development workers seem to have discrepancies with the production workers, and that these aren't really documented....
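One workaround for local iteration is to fabricate the job envelope yourself, so code that depends on the job ID can run without RunPod at all. A sketch, assuming the production queue hands the handler a dict with `id` and `input` keys (the usual shape; the webhook forwarding itself is left out):

```python
import uuid

def handler(job):
    # The worker SDK passes a dict with "id" and "input";
    # downstream code can forward job["id"] to the webhook.
    return {"job_id": job["id"], "echo": job["input"]}

def run_locally(payload):
    # Local harness (sketch): fabricate the envelope the production
    # queue would provide, including a job ID, so code that relies
    # on job["id"] can be exercised without the dev worker.
    fake_job = {"id": f"local-{uuid.uuid4().hex[:8]}", "input": payload}
    return handler(fake_job)

print(run_locally({"x": 1}))
```

This doesn't reproduce every production behavior, but it removes the job-ID discrepancy from the local loop.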

Multiple models in a single serverless endpoint?

Hi everyone, I was wondering if this is possible, since the environment variable seems to suggest that it's something that's supported, as well as the fact that you have to mention the model name when posting a request....

Custom nodes comfyui serverless

Hi guys, I'm trying to setup a workflow with custom nodes, but keep running into an error: Generation error: Error: Job failed: "Error queuing workflow: HTTP Error 400: Bad Request" ...

Keeping idle workers alive even without any requests.

Hey everyone, does anyone have a clear understanding of how idle timeout works on RunPod? It seems like billing is based on max workers by default. For instance on this deployment I set 5 max workers, 0 active workers, and an idle timeout of 5 seconds, but even with no requests, I still see 3 idle workers. Is this expected behavior, or is something off?...

How can I validate the network storage is being used for my serverless endpoint?

I download large files after initialisation using huggingface-cli. I see no speed improvement from enabling/disabling network storage, despite the network storage showing as connected in the UI, and I also get no "mount" message.
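One quick sanity check is to probe the mount from inside the handler itself. A sketch assuming the volume is mounted at `/runpod-volume` (the typical serverless path; adjust if yours differs):

```python
import os
import shutil

def volume_report(path="/runpod-volume"):
    # Check whether the path exists, whether it is an actual mount
    # point, and how much space is behind it; log this at startup
    # to confirm the network volume is really attached.
    if not os.path.isdir(path):
        return {"mounted": False}
    usage = shutil.disk_usage(path)
    return {
        "mounted": os.path.ismount(path),
        "total_gb": round(usage.total / 1e9, 1),
        "free_gb": round(usage.free / 1e9, 1),
    }

print(volume_report())
```

If `mounted` is False or the capacity matches the container disk instead of the volume size, the downloads are landing on ephemeral storage.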

No logs when build fails

There are no logs shown. How am I supposed to find out why it failed?

Help with deploying WhisperX ($35 bounty)

I've been trying to get WhisperX to run on runpod serverless. Here is what I have so far: https://github.com/YashGupta5961/whisperx-worker . The worker deploys, but it's running into some problems processing the request. I can't seem to debug what's going wrong. I am willing to offer $35 USD to anyone who can get it working with diarization. I know it's not much, but I hope it's enough motivation to bang your head against the wall for me 😄...

How can I connect my code to a RunPod GPU via the API?

do we get billed partially or rounded up to the second?

If my execution time is 0.35 seconds, will I be billed for a full second on that request, or only for the fraction used?

Max workers increase

Hi, we are planning a production launch and are currently using a serverless setup. We see the max workers limit is 5 right now, and with a balance of 100 we can increase it to 10. I want to understand the process for increasing it to, say, 20 or 100 in the future.

Runpod workers getting staggered when I call more than 1 at a time.

So I'm currently connected to the endpoint, and I've noticed that the workers tend to be deployed in a staggered way. That is, I have a function that splits a workload into 50 runpod jobs, but for some reason my endpoint does not actually use all 50 workers that I have ready. Instead the workers seem to be deployed staggered: I'll see that 36 of the jobs went through and are running while I still have 14 jobs in queue while I h...

Feb 20 - Serverless Issues Mega-Thread

Many people seem to be running into the following issue: workers are "running" but they're not working on any requests, and requests just sit there for 10m+ queued up without anything happening. I think there is an issue with how the requests are getting assigned to the workers: there are a number of idling workers and a number of queued requests, and both stay in that state for many minutes without any requests getting picked up by workers! ...

Default execution timeout

In the docs it says that all serverless endpoints have a 10-minute default execution timeout. We have had a few instances where a job is stuck in processing for hours. Are the docs incorrect, and do we need to set the execution timeout manually?