flash-singh
RunPod
•Created by blue whale on 2/11/2025 in #⚡|serverless
Job stuck in queue and workers are sitting idle
this usually means the worker isn't picking up the job. Do you have an endpoint ID or anything else we can look at on our end?
36 replies
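When jobs sit in queue while workers look idle, the first thing to compare is the endpoint's health report against what the console shows. A minimal sketch of that check using RunPod's serverless `/v2/{endpoint_id}/health` route (endpoint ID and API key are placeholders; the exact response fields may differ from this assumption):

```python
import json
import urllib.request

API_BASE = "https://api.runpod.ai/v2"

def health_url(endpoint_id: str) -> str:
    """Build the health-check URL for a serverless endpoint."""
    return f"{API_BASE}/{endpoint_id}/health"

def check_endpoint(endpoint_id: str, api_key: str) -> dict:
    # The health route reports queued jobs and idle/running workers,
    # which is exactly the pair to inspect when jobs queue up
    # while workers appear idle.
    req = urllib.request.Request(
        health_url(endpoint_id),
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)
```

If the report shows jobs in queue with idle workers, that mismatch (plus the endpoint ID) is what support needs.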
RunPod
•Created by dreamingearthman on 2/17/2025 in #⚡|serverless
Seems like my serverless instance is running with no requests being processed
what do the logs or status show you?
7 replies
RunPod
•Created by Sven on 1/30/2025 in #⚡|serverless
Whitelist IP Addresses
it's already live
5 replies
RunPod
•Created by EMPZ on 12/16/2024 in #⚡|serverless
GitHub integration: "exporting to OCI image format" takes forever.
we have switched to a new builder; so far we are seeing 2-3x faster builds
25 replies
RunPod
•Created by JohnDoe on 2/12/2025 in #⚡|serverless
Pulling from the wrong cache when multiple Dockerfiles in same GitHub repo
that's possible @PRB
24 replies
RunPod
•Created by Edo do di on 2/12/2025 in #⚡|serverless
Serverless confusion
it's automatically supported as long as you use something like vLLM; check our docs on how to use it
2 replies
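For context, "automatically supported" here means the worker exposes the standard serverless job interface, so a vLLM endpoint is called like any other. A hedged sketch of a synchronous call (the `/runsync` route and the `input.prompt` payload shape are assumptions based on RunPod's vLLM worker; endpoint ID and key are placeholders):

```python
import json
import urllib.request

API_BASE = "https://api.runpod.ai/v2"

def build_runsync(endpoint_id: str, prompt: str):
    """Return the URL and JSON body for a synchronous job submission."""
    # /runsync blocks until the job finishes; /run returns a job ID
    # you poll via /status instead.
    url = f"{API_BASE}/{endpoint_id}/runsync"
    body = {"input": {"prompt": prompt, "sampling_params": {"max_tokens": 128}}}
    return url, body

def run_sync(endpoint_id: str, api_key: str, prompt: str) -> dict:
    url, body = build_runsync(endpoint_id, prompt)
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.load(resp)
```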
RunPod
•Created by JohnDoe on 2/12/2025 in #⚡|serverless
Pulling from the wrong cache when multiple Dockerfiles in same GitHub repo
will have eng look at this
24 replies
RunPod
•Created by zaid on 2/11/2025 in #⚡|serverless
What is expected continuous delivery (CD) setup for serverless endpoints for private models?
our long-term path is that we are introducing a model store, which can pull public and private models from Hugging Face and store them locally on servers with faster access, rather than in network storage; S3 support may be further down the road
7 replies
RunPod
•Created by zaid on 2/11/2025 in #⚡|serverless
What is expected continuous delivery (CD) setup for serverless endpoints for private models?
for a quick win we can get you a programmatic way to update credentials, along with updating the tag version
7 replies
RunPod
•Created by Yobs on 2/1/2025 in #⚡|serverless
Max image size the GitHub repo serverless integration can take?
what are you putting in the network volume? what's causing the delay? make sure to have at least 3 or more workers; rolling releases won't impact your current workload, so even with a new release some workers will keep handling the workload with the current image until the new container image is loaded
17 replies
RunPod
•Created by Yobs on 2/1/2025 in #⚡|serverless
Max image size the GitHub repo serverless integration can take?
yes, that would be faster if you were adding the model into the container image; it's a huge reduction in size
17 replies
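For reference, "adding the model into the container image" means fetching the weights at build time so cold starts skip the network fetch entirely. A sketch using Hugging Face's public resolve URLs (`org/model` and the file name are placeholders, and in practice `huggingface_hub.snapshot_download` is the more common tool for this):

```python
import urllib.request
from pathlib import Path

HF_RESOLVE = "https://huggingface.co/{repo}/resolve/{rev}/{filename}"

def model_file_url(repo: str, filename: str, rev: str = "main") -> str:
    """Build the direct-download URL for one file in a Hub repo."""
    return HF_RESOLVE.format(repo=repo, rev=rev, filename=filename)

def bake_file(repo: str, filename: str, dest_dir: str) -> Path:
    # Intended to run at image build time (e.g. from a build step),
    # so the weights ship inside the image instead of a network volume.
    dest = Path(dest_dir) / filename
    dest.parent.mkdir(parents=True, exist_ok=True)
    urllib.request.urlretrieve(model_file_url(repo, filename), dest)
    return dest
```

Private repos additionally need an `Authorization: Bearer <token>` header on the request.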
RunPod
•Created by digger18 on 2/6/2025 in #⚡|serverless
"worker exited with exit code 1" in my serverless workloads
can you share the endpoint ID in a PM?
5 replies