cudaGetDeviceCount() Error
When importing the exllamav2 library I got an error that made the serverless worker hang, repeatedly spitting out a stack trace. The error is:
What is this error about? Is it a problem with the library, or is there something wrong with the worker hardware I chose? And why doesn't the error stop the worker? It kept running for five minutes before I even realized.
5 Replies
What GPU and PyTorch version?
Looks like your Docker image probably uses CUDA 12.1, but you didn't use the CUDA filter and got a worker with CUDA 11.8 or 12.0.
I only checked these; torch is 2.1.2
Ah, I see, thanks. I just realized I didn't have that filter on; I've enabled it now.
But why doesn't the worker return an error? It doesn't get stopped automatically.
You probably need to scale workers down to zero and back up again for the change to take effect.
No, you need to check stuff during development and not assume everything is working 🙂
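A minimal sketch of that kind of startup check, assuming PyTorch is the only dependency involved: run a diagnostic before importing exllamav2 so a missing or mismatched CUDA runtime surfaces as a clear message instead of a hung worker. The function name `cuda_status` and the exact wording of the messages are made up for illustration.

```python
def cuda_status() -> str:
    """Return a human-readable CUDA diagnostic string.

    Call this at worker startup, before importing GPU-heavy
    libraries, so a broken CUDA setup fails fast and loudly.
    """
    try:
        import torch
    except ImportError:
        return "torch is not installed"
    if not torch.cuda.is_available():
        # Typical cause in this thread: the image's CUDA build
        # (e.g. 12.1) doesn't match the host driver/runtime.
        return (
            f"CUDA unavailable (torch {torch.__version__}, "
            f"built for CUDA {torch.version.cuda}); "
            "check that the host CUDA version matches"
        )
    return (
        f"{torch.cuda.device_count()} GPU(s) visible, "
        f"torch {torch.__version__}, CUDA {torch.version.cuda}"
    )


if __name__ == "__main__":
    # Print the status and exit non-zero on failure so the
    # platform marks the worker unhealthy instead of leaving it running.
    import sys

    status = cuda_status()
    print(status)
    if "GPU(s) visible" not in status:
        sys.exit(1)
```

Exiting with a non-zero status is the key part: it turns a silent hang into a visible failure the scheduler can act on.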