Teddy
RunPod
Created by Teddy on 3/19/2025 in #⛅|pods
vLLM and multiple GPUs
Can I serve the model on separate, independent GPUs directly?
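One way to do this, as a minimal sketch: instead of sharding one vLLM server across GPUs with tensor parallelism, launch one standalone server per GPU by pinning each process to a single device with `CUDA_VISIBLE_DEVICES`. The model name and port numbers below are placeholders, not anything confirmed in this thread.

```python
import os

def build_server_commands(model: str, num_gpus: int, base_port: int = 8000):
    """Return (env, argv) pairs: one independent vLLM server per GPU.

    Each process sees exactly one GPU via CUDA_VISIBLE_DEVICES and
    listens on its own port, so the instances are fully independent.
    """
    commands = []
    for gpu in range(num_gpus):
        env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu))
        argv = [
            "python", "-m", "vllm.entrypoints.openai.api_server",
            "--model", model,            # placeholder model name
            "--port", str(base_port + gpu),
        ]
        commands.append((env, argv))
    return commands

# Build launch commands for 4 GPUs; pass each pair to subprocess.Popen
# (e.g. subprocess.Popen(argv, env=env)) to actually start the servers.
cmds = build_server_commands("meta-llama/Llama-3-8B", num_gpus=4)
for env, argv in cmds:
    print(env["CUDA_VISIBLE_DEVICES"], argv[-1])
```

A load balancer (or simple round-robin in the client) would then spread requests across the four ports.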
15 replies
And what would you recommend to reach at least 4,000 rpm?
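A rough sizing sketch for that target, assuming "rpm" means requests per minute and an assumed average request latency (the 2-second figure below is an illustration, not a measurement): by Little's law, the number of requests in flight equals arrival rate times latency, which bounds how much concurrency the deployment must sustain.

```python
def required_concurrency(rpm: float, avg_latency_s: float) -> float:
    """Little's law: in-flight requests = arrival rate (req/s) * latency (s)."""
    return (rpm / 60.0) * avg_latency_s

# At 4,000 rpm with an assumed 2 s average latency, the cluster must
# hold roughly 133 requests in flight at any moment.
print(required_concurrency(4000, 2.0))
```

Whether one large tensor-parallel server or several independent single-GPU servers covers that depends on the per-GPU batch capacity for the specific model.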
It would be nice to be informed about this before making reservations.
So I need to reserve, for example, 4x L4 and then check it myself? And if it doesn't work, should I contact the support team directly?
15 replies
RRunPod
Created by Teddy on 3/19/2025 in #⛅|pods
vLLM and multiple GPUs
Is there any way to know whether the machines have NCCL?
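One possible probe, assuming PyTorch is installed on the pod (as it is in most vLLM images): `torch.distributed.is_nccl_available()` reports whether the installed PyTorch build has NCCL support, and `torch.cuda.nccl.version()` reports the bundled NCCL version.

```python
def nccl_status() -> str:
    """Report whether NCCL is usable on this machine.

    Degrades gracefully: returns a diagnostic string rather than
    raising if torch is missing or no GPU is visible.
    """
    try:
        import torch
        import torch.distributed as dist
    except ImportError:
        return "torch not installed"
    if not torch.cuda.is_available():
        return "no CUDA device visible"
    if not dist.is_nccl_available():
        return "PyTorch built without NCCL"
    return "NCCL " + ".".join(map(str, torch.cuda.nccl.version())) + " available"

print(nccl_status())
```

Running this inside the pod answers the question directly, without needing to ask support first.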