RTX 4090 pod CUDA issue
Hi,
I'm trying to launch a community pod and I run into this issue with the CUDA drivers:
ERROR: The NVIDIA Driver is present, but CUDA failed to initialize. GPU functionality will not be available.
2024-02-08T16:56:04.457515868Z [[ Initialization error (error 3) ]]
The pod is in the US, RTX 4090.
It's an NVIDIA container, details below:
=============
2024-02-08T16:56:04.419245167Z == PyTorch ==
2024-02-08T16:56:04.419246369Z =============
2024-02-08T16:56:04.419247661Z
2024-02-08T16:56:04.419248773Z NVIDIA Release 23.10 (build 71422337)
2024-02-08T16:56:04.419250697Z PyTorch Version 2.1.0a0+32f93b1
Is there a specific CUDA version required for 4090s?
The same Docker image runs without issues on A5000 and 3090 pods.
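Not a command from this thread, but a minimal sketch for narrowing down where the failure happens: it calls `cuInit` from the driver library directly via `ctypes`, bypassing PyTorch entirely. A non-zero return code here (the log's "error 3" corresponds to `CUDA_ERROR_NOT_INITIALIZED`) means the driver/runtime mismatch is below the framework level.

```python
import ctypes

def cuda_init_status():
    """Try to load the CUDA driver library and call cuInit(0),
    mirroring the 'Initialization error (error 3)' from the container log.
    Returns a human-readable status string."""
    try:
        lib = ctypes.CDLL("libcuda.so.1")
    except OSError:
        return "driver library not found"
    rc = lib.cuInit(0)  # 0 == CUDA_SUCCESS; 3 == CUDA_ERROR_NOT_INITIALIZED
    if rc == 0:
        return "CUDA initialized"
    return f"cuInit failed with error {rc}"

print(cuda_init_status())
```

If this prints "CUDA initialized" but PyTorch still fails, the problem is in the container's CUDA toolkit version rather than the host driver.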
6 Replies
Install PyTorch Nightly
Or just use the RunPod PyTorch template, which installs a stable version of PyTorch.
thanks guys, will try both
Did anything help you?
Hi @kopyl, yes. What I actually did was update my Docker image to use the latest CUDA 12.2.
If you use the official NVIDIA images, e.g. the latest NGC TensorRT container, they work just fine. Thank you!
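For anyone hitting the same thing: a minimal Dockerfile sketch of the fix described above, basing the image on an NGC container whose CUDA version supports the 4090. The tag below is an example only; check the NGC catalog for the current release of your stack.

```dockerfile
# Example tag only -- the 23.xx NGC releases ship CUDA 12.x,
# which supports the RTX 4090 (compute capability 8.9)
FROM nvcr.io/nvidia/tensorrt:23.10-py3

# Add your own dependencies on top of the NGC base
COPY requirements.txt /tmp/requirements.txt
RUN pip install -r /tmp/requirements.txt
```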
CUDA 11.8 was working with TensorRT for me too.