Running a Dockerized PyTorch-based computer vision app
I would like to run a PyTorch-based computer vision program on RunPod. I have a Dockerized application that relies on CUDA 12.1 and PyTorch 2.3.1. How can I do this with RunPod?
1 Reply
Deploy your Docker image (push it to a container registry that RunPod can pull from)
If your image is private, add your Docker registry credentials to your RunPod secrets so it can be pulled
Decide whether you want a Pod (available at all times and billed for the whole time it runs) or Serverless (some cold-start delay, but billed only for actual usage)
Follow the docs to get set up. For a Pod, I think you should be able to run your image as-is
For Serverless, you’ll need a custom rp_handler that runs inference and returns the payload to your own specs; see the sketch below
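Here's a minimal sketch of what that rp_handler could look like for a PyTorch vision model. The model path (/app/model.pt), the base64-encoded "image" input key, the torchvision preprocessing, and the classification-style output are all placeholders for illustration, so swap in your own app's loading and inference code:

```python
# rp_handler.py -- minimal sketch of a RunPod serverless handler for a
# PyTorch vision model. Model path, input key, preprocessing, and output
# format are placeholders; adapt them to your own app.
import base64
import io

import runpod
import torch
from PIL import Image
from torchvision import transforms

# Load the model once at startup so it is reused across requests.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.jit.load("/app/model.pt", map_location=device)  # placeholder path
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])


def handler(event):
    # RunPod passes the request payload under event["input"].
    image_b64 = event["input"]["image"]  # assumed key: base64-encoded image
    image = Image.open(io.BytesIO(base64.b64decode(image_b64))).convert("RGB")

    batch = preprocess(image).unsqueeze(0).to(device)
    with torch.no_grad():
        output = model(batch)

    # Return a JSON-serializable payload shaped to your own specs.
    return {"prediction": int(output.argmax(dim=1).item())}


runpod.serverless.start({"handler": handler})
```

If I remember right, you can also test the handler locally (e.g. with a test_input.json next to the file) before pushing the image, which saves a lot of redeploy cycles.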
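Once the endpoint is deployed, calling it from your own code with the runpod Python SDK looks roughly like this. The endpoint ID, env var name, and image file are placeholders, and you should double-check the exact request format against the current SDK docs:

```python
# call_endpoint.py -- rough sketch of calling the deployed Serverless endpoint
# with the runpod Python SDK. Endpoint ID, env var, and image file are
# placeholders; check the current SDK docs for the exact request format.
import base64
import os

import runpod

runpod.api_key = os.environ["RUNPOD_API_KEY"]   # assumes your API key is set here
endpoint = runpod.Endpoint("YOUR_ENDPOINT_ID")  # placeholder endpoint ID

with open("example.jpg", "rb") as f:            # placeholder input image
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

# run_sync blocks until the worker finishes; the "image" key matches what the
# handler sketch above reads from event["input"].
result = endpoint.run_sync({"input": {"image": image_b64}})
print(result)
```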