Runpod GPU use with a Docker image built on a Mac
I am building serverless applications that are meant to use the GPU. While testing locally, the functions that should run on the GPU select their device with the common pattern:
device: str = "cuda" if th.cuda.is_available() else "cpu"
This is needed so that the CPU device is used when running locally on a Mac. I would expect that a Docker image built on a Mac, with amd64 specified as the target platform in the build command, would use the CUDA GPU once deployed on a server, given that the image uses a CUDA base image. But that does not seem to be the case.
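For reference, a minimal diagnostic sketch of the check above (assuming `torch` is imported as `th`, as in the snippet; the `try`/`except` fallback is my addition so it also runs where torch is absent). Printing `th.version.cuda` inside the deployed container shows whether the PyTorch build itself has CUDA support, which is separate from whether the host has a GPU:

```python
# Diagnostic sketch: run inside the deployed container to see what the
# runtime actually detects.
try:
    import torch as th
    cuda_available = th.cuda.is_available()
    cuda_build = th.version.cuda  # None on a CPU-only PyTorch wheel
except ImportError:
    # torch not installed at all; fall back to CPU
    cuda_available, cuda_build = False, None

device = "cuda" if cuda_available else "cpu"
print(f"device={device}, torch CUDA build={cuda_build}")
```

A CPU-only PyTorch wheel reports `th.version.cuda` as `None` and `is_available()` as `False` even on a host with a working GPU, so this distinguishes "wrong wheel baked into the image" from "GPU not exposed to the container."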
I have not been able to figure out why for the longest time. My Runpod serverless pods show only CPU usage when tested.
Any advice?