Docker image using headless OpenGL (EGL, surfaceless platform) works locally, falls back to CPU in Runpod
Hi all, I'm wondering if anyone can educate me on what would be causing this difference in behaviour when running a container locally versus in Runpod, and whether there is a solution.
In summary, I'm trying to run a headless OpenGL program in a Docker container by using EGL with the surfaceless platform (https://registry.khronos.org/EGL/extensions/MESA/EGL_MESA_platform_surfaceless.txt). I was able to get the program working as intended in a container outside of Runpod. But once deployed to Runpod, it falls back to CPU processing.
As a minimal testcase, it's sufficient to simply run `eglinfo`, a utility which tells you what EGL devices are available. Outside of Runpod multiple devices are listed, but in Runpod none are. The testcase and example outputs are available here: https://github.com/rewbs/egldockertest .
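For anyone following along: the first thing `eglinfo` effectively does is enumerate devices via the `EGL_EXT_device_enumeration` extension. Here's a rough sketch of just that one query in Python/ctypes (the extension and function name are from the EGL registry; the fallback handling is mine and purely illustrative):

```python
import ctypes
import ctypes.util

def count_egl_devices():
    """Return the number of EGL devices, or None if libEGL or the
    EGL_EXT_device_enumeration extension is unavailable."""
    path = ctypes.util.find_library("EGL")
    if path is None:
        return None  # no libEGL at all in this environment
    egl = ctypes.CDLL(path)
    egl.eglGetProcAddress.restype = ctypes.c_void_p
    addr = egl.eglGetProcAddress(b"eglQueryDevicesEXT")
    if not addr:
        return None  # extension not exposed
    # EGLBoolean eglQueryDevicesEXT(EGLint max, EGLDeviceEXT *devs, EGLint *num)
    proto = ctypes.CFUNCTYPE(ctypes.c_uint, ctypes.c_int,
                             ctypes.c_void_p, ctypes.POINTER(ctypes.c_int))
    query_devices = proto(addr)
    n = ctypes.c_int(0)
    # Per the spec, a NULL device array just asks for the total count.
    if not query_devices(0, None, ctypes.byref(n)):
        return None
    return n.value

print(count_egl_devices())
```

Locally this should report a device count consistent with `eglinfo`; in the broken container it comes back as 0 or None.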
Any ideas very much appreciated!
(As an aside, I should note I'm by no means an OpenGL expert, so I might be getting confused, or at the very least getting the terminology wrong.)
What kind of program are you trying to run?
My desktop image uses EGL and is derived from Selkies EGL for Kubernetes (linked in the repo). You'll need to install the NVIDIA display drivers, because there is no /dev/dri on RunPod.
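A quick way to see whether the pieces that point above are in place is to check the filesystem directly. This is a sketch; the paths are the usual glvnd/NVIDIA locations on Debian/Ubuntu-style images and may differ on your distro:

```python
import glob

def egl_stack_report():
    """Gather filesystem evidence for the NVIDIA EGL stack.
    Path patterns are assumptions based on common Debian/Ubuntu layouts."""
    return {
        # glvnd vendor ICD that tells libEGL where the NVIDIA driver lives
        "glvnd NVIDIA ICD": glob.glob("/usr/share/glvnd/egl_vendor.d/*nvidia*"),
        # GPU device nodes exposed to the container
        "NVIDIA device nodes": glob.glob("/dev/nvidia*"),
        # DRI render nodes (reportedly absent on RunPod)
        "DRI render nodes": glob.glob("/dev/dri/*"),
        # the NVIDIA vendor EGL library itself
        "NVIDIA EGL libraries": glob.glob("/usr/lib/**/libEGL_nvidia*",
                                          recursive=True),
    }

for name, found in egl_stack_report().items():
    print(f"{name}: {found if found else 'not found'}")
```

If the vendor ICD or `libEGL_nvidia` is missing, EGL silently falls back to Mesa's llvmpipe, which matches the CPU-rendering symptom described here.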
An old-school audio-visualisation renderer (I'm the author of https://vizrecord.app/ which is client-side; from there you can probably guess what I'm building).
Thanks so much, will take a look
Wow, that looks like an impressive piece of work. Am I right in thinking your image re-installs the driver on every startup? If so I assume it's designed for a long-running pod rather than serverless tasks, and probably won't be sensible for my serverless usecase, where a job execution would typically be under 30s.
Yeah, that wouldn't make much sense unfortunately. I raised an issue on the Selkies EGL repo and their feedback was that the driver install shouldn't be necessary, but my experience was llvmpipe rendering without it. I am hopeful there is a solution though.
Hey! Been a while, but I'm running into the same problem. Were you able to resolve it?
Hey, nope, still can't get it to run on the GPU. I'm resorting to running this process in parallel with other tasks (that do use the GPU) within the same serverless invocation!
If you figure it out please report back! I wonder if it's something to do with the privileges made available to docker containers in Runpod vs locally.
Well, it's the software that decides which hardware to use @rewbs
Maybe it doesn't support the GPU, like NVIDIA on Linux, or it depends on what OS it's using
Or it's probably the wrong driver
Not sure how the software works, so I can't debug it yet
The hardware is definitely there and supported. My serverless endpoint kicks off 2 concurrent processes on the same serverless worker: one surfaceless EGL task (similar to the example codebase above), which fails to detect and use the Nvidia GPU, and one "standard" Python ML process, which does find and use the Nvidia GPU.
I mean not the hardware, the software
Different software might not behave the same as other software
Oh. Which software are you referring to though? My code? (There are many layers of software in play here.)
Yep, that's what I'm not sure of, because I'm not able to read the code there, but if there are some docs that describe its compatibility maybe that can help
Oh, what is EGL?
Is that a type of GPU, or some kind of custom hardware?
I have no experience in these fields, sorry, so I might not be able to help you much
No worries, this is not an easy problem. EGL is the interface between OpenGL and the underlying platform; it manages displays, contexts, and surfaces, and its surfaceless platform is what makes headless rendering possible.
Oh wow
You might want to look more into how EGL, or the libraries and code it uses, locate the NVIDIA drivers or NVIDIA's code
Finally got it, actually, by installing the right NVIDIA driver (535) on our Debian slim image. We're not doing serverless though, just pods for now.
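For the record, that approach can be sketched roughly as below. This is a hypothetical fragment, not the exact Dockerfile from the thread: the idea is to install only the userspace half of the NVIDIA driver (no kernel module) so EGL can find the NVIDIA vendor library. The branch (535) and exact version (535.129.03 is illustrative) must match the host's kernel-side driver:

```dockerfile
FROM debian:bookworm-slim

# Illustrative version; must match the host's kernel driver branch.
ARG DRIVER_VERSION=535.129.03

RUN apt-get update \
 && apt-get install -y --no-install-recommends curl ca-certificates kmod libegl1 \
 && curl -fsSL -o /tmp/nvidia.run \
      "https://download.nvidia.com/XFree86/Linux-x86_64/${DRIVER_VERSION}/NVIDIA-Linux-x86_64-${DRIVER_VERSION}.run" \
 # Userspace-only install: skips building/loading the kernel module,
 # which the host (e.g. a RunPod worker) already provides.
 && sh /tmp/nvidia.run --silent --no-kernel-module \
 && rm -rf /tmp/nvidia.run /var/lib/apt/lists/*
```

The downside, as noted earlier in the thread, is image size and install time, which is why it fits long-running pods better than sub-30s serverless jobs.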