flash-singh
RunPod
Created by AEmirhan on 1/21/2025 in #⛅|pods
Runpod storage configuration
this will depend on your use case, you can store the tar file in network storage and then extract it to the local container disk for actual work, that way network storage only has to deal with one or a few files instead of millions
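a minimal sketch of that pattern, assuming the network volume is mounted at /workspace and the archive gets unpacked onto the container disk (both paths are placeholders, adjust for your pod):

```python
import tarfile
from pathlib import Path

# placeholder paths: one big tar on the network volume, scratch dir on the container disk
ARCHIVE = Path("/workspace/dataset.tar")
DEST = Path("/tmp/dataset")

DEST.mkdir(parents=True, exist_ok=True)

# extract once at startup; after this every read hits the local container disk,
# and the network volume only ever has to serve a single large file
with tarfile.open(ARCHIVE) as tar:
    tar.extractall(path=DEST)

print(f"extracted {ARCHIVE} -> {DEST}")
```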
7 replies
RunPod
Created by blue whale on 2/11/2025 in #⚡|serverless
Job stuck in queue and workers are sitting idle
this usually means the worker isn't picking up the job, do you have an endpoint id or anything else we can look at on our end?
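a quick way to check that from your side, sketched with the runpod Python SDK (endpoint id and api key are placeholders):

```python
import runpod

runpod.api_key = "YOUR_API_KEY"            # placeholder
endpoint = runpod.Endpoint("ENDPOINT_ID")  # placeholder endpoint id

# health() reports worker and job counts, which shows whether jobs
# are sitting in the queue while workers stay idle
print(endpoint.health())
```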
36 replies
RunPod
Created by dreamingearthman on 2/17/2025 in #⚡|serverless
Seems like my serverless instance is running with no requests being processed
what do the logs or status give you?
7 replies
RunPod
Created by Sven on 1/30/2025 in #⚡|serverless
Whitelist IP Addresses
it's already live
5 replies
RunPod
Created by EMPZ on 12/16/2024 in #⚡|serverless
GitHub integration: "exporting to oci image format" takes forever.
we have switched to the new builder, so far we are seeing 2-3x faster builds
25 replies
RunPod
Created by JohnDoe on 2/12/2025 in #⚡|serverless
Pulling from the wrong cache when multiple Dockerfiles in same GitHub repo
that's possible @PRB
24 replies
RunPod
Created by Edo do di on 2/12/2025 in #⚡|serverless
Serverless confusion
it's automatically supported as long as you use something like vLLM, check our docs on how to use it
2 replies
RunPod
Created by JohnDoe on 2/12/2025 in #⚡|serverless
Pulling from the wrong cache when multiple Dockerfiles in same GitHub repo
will have eng look at this
24 replies
RunPod
Created by Aayush999 on 2/10/2025 in #⛅|pods
Docker image from Docker hub
runpod only supports amd64, not arm64
15 replies
RunPod
Created by Aayush999 on 2/10/2025 in #⛅|pods
Docker image from Docker hub
make sure to add auth if it's a private image
15 replies
RunPod
Created by Igor Gulamov on 2/11/2025 in #⛅|pods
deepseek-r is loading for >1h into vram.
don't use network storage to load the models, instead move them to the container disk or pod volume disk and see if that loads them any faster
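a rough sketch of that move, assuming the weights currently sit on the network volume at /workspace and get copied to the container disk before loading (paths are placeholders):

```python
import shutil
from pathlib import Path

SRC = Path("/workspace/models/my-model")  # network volume: slow for many small random reads
DST = Path("/models/my-model")            # container disk: local, much faster to load from

# copy once up front, then point the loader at the local path
if not DST.exists():
    shutil.copytree(SRC, DST)

# load_model(DST)  # hypothetical loader call, use whatever loads your weights
```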
16 replies
RunPod
Created by zaid on 2/11/2025 in #⚡|serverless
What is expected continuous delivery (CD) setup for serverless endpoints for private models?
our long-term path is to introduce a model store that can pull public and private models from Hugging Face and store them locally on servers for faster access, rather than in network storage; S3 support may be further down the road
7 replies
RunPod
Created by zaid on 2/11/2025 in #⚡|serverless
What is expected continuous delivery (CD) setup for serverless endpoints for private models?
for a quick win we can get you a programmatic way to update creds, along with updating the tag version
7 replies
RunPod
Created by Yobs on 2/1/2025 in #⚡|serverless
Max image github repo serverless integration can take?
what are you putting in the network volume? what's causing the delay? make sure to have at least 3 workers; rolling releases won't impact your current workload, so even with a new release some workers will handle the workload using the current image until the new container image is loaded
17 replies
RunPod
Created by Yobs on 2/1/2025 in #⚡|serverless
Max image github repo serverless integration can take?
yes that would be faster if you were adding the model into the container image, it's a huge reduction in size
17 replies
RunPod
Created by wuxmes on 2/7/2025 in #⛅|pods
Maintenance
what datacenter is that?
7 replies
RunPod
Created by digger18 on 2/6/2025 in #⚡|serverless
"worker exited with exit code 1" in my serverless workloads
can you share the endpoint id in a PM?
5 replies
RunPod
Created by wuxmes on 2/7/2025 in #⛅|pods
Maintenance
yes, the start time is when it should be taken down; if you can still run it, it might just be lagging behind schedule, otherwise it's likely offline
7 replies
RunPod
Created by ivan.prosperi on 2/7/2025 in #⛅|pods
Choose CPU model on Pods
to your actual question, we select CPUs based on CPU speed, you do not get to select the actual model of the CPU; cpu3 means core speeds up to 3GHz
8 replies
RunPod
Created by ivan.prosperi on 2/7/2025 in #⛅|pods
Choose CPU model on Pods
cpu5 is the best if you want the best core performance, up to 5GHz
8 replies