RunPod•11mo ago
szeth4401

SkyPilot + RunPod: No resource satisfying the request

Hi team. I'm trying to use SkyPilot + vLLM + RunPod to serve a custom-trained LLM. I cannot get SkyPilot to launch a resource. I get the following error:

I 02-22 00:16:32 optimizer.py:1206] No resource satisfying <Cloud>({'NVIDIA RTX A6000': 1}, ports=['8888']) on RunPod.
sky.exceptions.ResourcesUnavailableError: Catalog does not contain any instances satisfying the request:

I tried numerous GPU IDs and none worked. Please see my SkyPilot YAML file below.

service:
  readiness_probe: /v1/models
  replicas: 1

resources:
  ports: 8888
  accelerators: {NVIDIA RTX A6000: 1}  <-- have tried A10G, A100, etc, nothing works.

setup: |
  conda create -n vllm python=3.9 -y
  conda activate vllm
  pip install vllm

run: |
  conda activate vllm
  python -m vllm.entrypoints.openai.api_server \
    --tensor-parallel-size $SKYPILOT_NUM_GPUS_PER_NODE \
    --host 0.0.0.0 --port 8888 \
    --model mistralai/....

What am I doing wrong? Thanks
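Note: the error says the catalog has no entry for the accelerator string "NVIDIA RTX A6000", which points to a naming mismatch rather than a capacity problem. SkyPilot's catalogs use their own GPU identifiers, and `sky show-gpus --cloud runpod` lists the names it accepts for RunPod. Below is a minimal sketch of a corrected spec, assuming the RunPod catalog lists the card as `RTXA6000` (verify with the command above; the model path is a placeholder):

```yaml
# Sketch only: assumes the RunPod catalog names this GPU "RTXA6000".
# Confirm the exact identifier with: sky show-gpus --cloud runpod
service:
  readiness_probe: /v1/models
  replicas: 1

resources:
  cloud: runpod              # pin the cloud so the optimizer only searches RunPod's catalog
  ports: 8888
  accelerators: {RTXA6000: 1}

setup: |
  conda create -n vllm python=3.9 -y
  conda activate vllm
  pip install vllm

run: |
  conda activate vllm
  python -m vllm.entrypoints.openai.api_server \
    --tensor-parallel-size $SKYPILOT_NUM_GPUS_PER_NODE \
    --host 0.0.0.0 --port 8888 \
    --model <your-model>     # placeholder for the custom-trained model
```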
2 Replies
szeth4401OP•11mo ago
Yes, strange you haven't heard of them; they claim they support RunPod haha. Oh OK, thanks for the interest 😉 have a look at their page to get an idea 🙂 I know serverless, I'm using it for my other projects.
justin•11mo ago
Just deleting my messages to not clog up your question. But from reading their docs, I think SkyPilot would be launching GPUs for you through GPU Pods rather than serverless. If that is what you want, then go for it~ But serverless doesn't seem to be supported that way, so it's probably better to just let RunPod manage it for you and deploy as a vLLM worker instead.
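If you take the serverless route described above, RunPod's vLLM worker exposes an OpenAI-compatible API. A rough sketch of calling it is below; the endpoint ID, API key, and model name are placeholders, and the base URL pattern is an assumption to check against RunPod's vLLM worker docs:

```bash
# Sketch only: assumes the RunPod serverless vLLM worker serves an
# OpenAI-compatible route at /openai/v1 under your endpoint ID.
curl https://api.runpod.ai/v2/<endpoint_id>/openai/v1/chat/completions \
  -H "Authorization: Bearer $RUNPOD_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "<your-model>",
        "messages": [{"role": "user", "content": "Hello"}]
      }'
```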