dynafire
RunPod
Created by dynafire on 8/14/2024 in #⛅|pods
Multiple containers on a single GPU instance?
I'm actually interested in serverless too, but in my testing it didn't quite work the way I hoped it would. Might do some more testing later.
You can't argue both sides: if I can already do it now by writing my app/container differently, then it has no impact on RP's business concerns. And I can already do it now; it's just less clean from a code and orchestration perspective.
It's really more about convenience and cleanliness. The point you're making, which I'm also making, is that if you're leaving the GPU idle, the only gain is programming/orchestration simplicity. If I can run multiple docker containers attached to the same GPU, that's a preferable programming/orchestration approach in some cases, with zero impact on RP's business concerns. I'm not magically creating capacity out of nowhere; I'm just arranging the same workload differently.
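To illustrate what I mean (a minimal sketch, not anything RunPod exposes today: it assumes a plain Docker host with the NVIDIA Container Toolkit installed, the docker Python SDK, and made-up image names):
```python
import docker
from docker.types import DeviceRequest

client = docker.from_env()

# Two independent containers, both attached to the same physical GPU
# (device 0). Docker itself allows this; the GPU is simply shared, so
# total capacity is unchanged -- the workload is just arranged differently.
for image in ["my-inference-image", "my-worker-image"]:  # hypothetical images
    client.containers.run(
        image,
        detach=True,
        device_requests=[DeviceRequest(device_ids=["0"], capabilities=[["gpu"]])],
    )
```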
What I'm describing does bear some resemblance to docker-in-docker, and that would theoretically be one way of solving it, but it's not the only way (and I'm pretty doubtful GPU passthrough works under docker-in-docker anyway). It should conceptually be possible to schedule multiple docker containers onto the same GPU, and it shouldn't be difficult if the system had been designed to support it from the start. But I can imagine it's not easy to retrofit if there's an implicit assumption of one docker container per GPU. I'm still interested to hear whether there are any plans to support this at some point; a toy sketch of the scheduling side follows.
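Conceptually, all it takes on the scheduler side is relaxing the 1:1 GPU-to-container mapping to 1:N (a toy sketch with made-up names, nothing to do with RunPod's actual scheduler):
```python
class GpuScheduler:
    """Toy scheduler: each GPU may host several containers instead of one."""

    def __init__(self, gpu_ids, max_per_gpu=4):
        self.max_per_gpu = max_per_gpu
        self.assignments = {gpu: [] for gpu in gpu_ids}  # gpu -> container list

    def place(self, container_id):
        # Pick the least-loaded GPU that still has a free slot.
        gpu = min(self.assignments, key=lambda g: len(self.assignments[g]))
        if len(self.assignments[gpu]) >= self.max_per_gpu:
            raise RuntimeError("no capacity left")
        self.assignments[gpu].append(container_id)
        return gpu

sched = GpuScheduler(["GPU-0", "GPU-1"])
for c in ["ctr-a", "ctr-b", "ctr-c"]:
    print(c, "->", sched.place(c))  # ctr-a and ctr-c end up sharing GPU-0
```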