RunPod11mo ago
Jack

Failed to load library libonnxruntime_providers_cuda.so

Here is the full error:

```
[E:onnxruntime:Default, provider_bridge_ort.cc:1480 TryGetProviderInfo_CUDA] /onnxruntime_src/onnxruntime/core/session/provider_bridge_ort.cc:1193 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_cuda.so with error: libcufft.so.10: cannot open shared object file: No such file or directory
```

I am running AUTOMATIC1111 on Serverless Endpoints using a Network Volume, with the faceswaplab extension. The extension has an option to use the GPU (by default it only uses the CPU). When I turn on the Use GPU option, I get the error above. It would seem the Serverless Endpoint does not have the libonnxruntime_providers_cuda.so library. Can I install this particular library into the Serverless Endpoint myself, either onto the Network Volume or into the Docker container?
5 Replies
ashleyk
ashleyk11mo ago
I suggest logging an issue against the faceswaplab extension repo; this is not a RunPod issue. You most likely installed onnxruntime instead of onnxruntime-gpu. Yep, you definitely installed onnxruntime instead of onnxruntime-gpu. I confirmed that libonnxruntime_providers_cuda.so is provided by the onnxruntime-gpu package.
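The CPU/GPU wheel mix-up described above can be spotted mechanically from the list of installed packages. A minimal sketch (the helper name and the message strings are mine, not from any real package):

```python
def check_onnx_packages(installed):
    """Given a list of installed package names, flag the common
    onnxruntime CPU/GPU conflict: the plain `onnxruntime` wheel is
    CPU-only and does not ship libonnxruntime_providers_cuda.so."""
    pkgs = {p.lower() for p in installed}
    has_cpu = "onnxruntime" in pkgs
    has_gpu = "onnxruntime-gpu" in pkgs
    if has_cpu and has_gpu:
        return "conflict: both wheels installed; the CPU build can shadow the GPU one"
    if has_gpu:
        return "ok: only onnxruntime-gpu installed"
    if has_cpu:
        return "cpu-only: install onnxruntime-gpu to get CUDAExecutionProvider"
    return "missing: no onnxruntime package found"
```

In practice you would feed it the output of `pip list` (or `importlib.metadata.distributions()`); the point is simply that having both wheels installed at once is itself a problem, not just having the wrong one.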
ashleyk
ashleyk11mo ago
[image attachment]
Jack
JackOP11mo ago
@ashleyk Thanks for the tip. My network volume had both onnxruntime and onnxruntime-gpu installed. I tried to uninstall onnxruntime, but each time I run A1111 it reinstalls onnxruntime automatically, since it is listed as a requirement by faceswaplab. P.S. I'm actually using your runpod-worker-a1111 on my Network Volume, with faceswaplab installed on top of it. I am able to use the GPU for faceswaplab in a Colab notebook and also locally, so I had assumed it might be an issue with RunPod. But I could be wrong.
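One hedged workaround for the reinstall loop described above: let the A1111 launcher finish installing the extension requirements, then drop the CPU wheel and force-reinstall the GPU one from the same environment. A sketch only (the function name is mine, and whether this survives the next cold start depends on how the launcher re-checks requirements):

```python
import subprocess
import sys

def force_gpu_onnxruntime():
    """After A1111's launcher has pulled in the CPU `onnxruntime` wheel
    (via the extension's requirements), remove it and reinstall the GPU
    build. Must run in the same Python environment A1111 uses, and may
    need to be re-run on every start if the launcher reinstalls the
    CPU wheel each time."""
    subprocess.run(
        [sys.executable, "-m", "pip", "uninstall", "-y", "onnxruntime"],
        check=False,  # ok if it was already removed
    )
    subprocess.run(
        [sys.executable, "-m", "pip", "install", "--force-reinstall", "onnxruntime-gpu"],
        check=True,
    )
```

The equivalent two pip commands could also go into a startup script baked into the Docker image, so they run after the requirements check on each worker boot.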
covff
covff11mo ago
I have the same issue, but with the GPU cloud:

```
2024-01-14 11:36:32.872356754 [E:onnxruntime:Default, provider_bridge_ort.cc:1480 TryGetProviderInfo_CUDA] /onnxruntime_src/onnxruntime/core/session/provider_bridge_ort.cc:1193 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_cuda.so with error: libcurand.so.10: cannot open shared object file: No such file or directory
```

I'm using the official "RunPod SD Comfy UI". The problem is that the installed Ubuntu doesn't have those CUDA libraries, and you can't install them on your own. I tried for hours and hours: I installed both onnxruntime-gpu and onnxruntime, then uninstalled onnxruntime, then uninstalled both and installed optimum[onnxruntime-gpu], and I even tried multiple versions of torch and torchvision. If the OS doesn't have those libraries, it doesn't matter what package we install. Jack doesn't have that problem using faceswaplab locally or in a Colab because those systems have the libraries; the problem is RunPod doesn't. You can "find /" as many times as you want.
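The "missing system library" diagnosis above can be checked directly from Python by asking the dynamic loader for each library the two error messages name (libcufft.so.10 and libcurand.so.10). A minimal sketch, with a helper name of my own:

```python
import ctypes

def probe_cuda_libs(names=("libcufft.so.10", "libcurand.so.10")):
    """Try to dlopen each CUDA runtime library named in the errors
    above; return the ones the dynamic loader cannot find."""
    missing = []
    for name in names:
        try:
            ctypes.CDLL(name)  # raises OSError if not on the loader's search path
        except OSError:
            missing.append(name)
    return missing

print(probe_cuda_libs())
```

If both come back missing, one avenue worth trying (an assumption on my part, not something confirmed in this thread) is installing NVIDIA's pip-distributed CUDA 11 runtime wheels such as nvidia-cufft-cu11 and nvidia-curand-cu11, which ship these .so files inside the site-packages tree, and then adding their lib directories to LD_LIBRARY_PATH so the loader can find them.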
Jack
JackOP11mo ago
Yeah, I gave up trying to solve this problem. I'm just sticking to using the CPU, which is a massive waste of GPU time, but whatever. At least it works.