ComfyUI Serverless with access to lots of models
Hi, I have a pre-sales question. I'm currently hosting a Discord bot and website for image generation using ComfyUI API endpoints on a local PC. It has around 1 TB of checkpoints and LoRAs available to be used, but as the number of users grows I'm considering a serverless GPU where I pay just for compute time.
With Runpod serverless, can I quickly deploy ComfyUI instances with whatever checkpoints/LoRAs a user wants for their generation? I was thinking of keeping the most popular models on Runpod storage for the fastest deployment, while rarely used ones are downloaded on demand and swapped out to make room when needed (roughly the swap logic sketched below).
Am I able to do this, or something similar?
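For illustration, the download-on-demand and swap-out part could look something like this. It's a minimal sketch, not Runpod-specific: the directory path, the 900 GB budget, and the size hint are all placeholder assumptions, and it evicts least-recently-used files first.

```python
import os
import urllib.request

# Placeholder paths/limits -- adjust to your own volume layout.
MODELS_DIR = "/runpod-volume/models/checkpoints"
MAX_BYTES = 900 * 1024**3  # leave headroom on a ~1 TB volume

def _dir_size(path):
    """Total size in bytes of all files under path."""
    return sum(
        os.path.getsize(os.path.join(root, f))
        for root, _, files in os.walk(path)
        for f in files
    )

def evict_until_fit(needed_bytes):
    """Delete least-recently-used model files until needed_bytes fits."""
    files = [
        os.path.join(MODELS_DIR, f)
        for f in os.listdir(MODELS_DIR)
        if os.path.isfile(os.path.join(MODELS_DIR, f))
    ]
    # Oldest modification time first = least recently used
    # (we bump mtime on every access in ensure_model).
    files.sort(key=os.path.getmtime)
    while files and _dir_size(MODELS_DIR) + needed_bytes > MAX_BYTES:
        os.remove(files.pop(0))

def ensure_model(name, url, size_hint):
    """Return a local path for a model, downloading on demand."""
    path = os.path.join(MODELS_DIR, name)
    if os.path.exists(path):
        os.utime(path)  # mark as recently used
        return path
    evict_until_fit(size_hint)
    urllib.request.urlretrieve(url, path)
    return path
```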
Yes you can
Solution
By using network storage and serverless
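Rough picture of how those two fit together: the network volume holds the model files (on serverless workers it's mounted at /runpod-volume, and you can point ComfyUI's extra_model_paths.yaml at it so every worker sees the same checkpoints/LoRAs without baking them into the image), and each worker runs a handler that forwards jobs to the local ComfyUI API. A minimal sketch, assuming ComfyUI is already running inside the container and the job input shape ({"workflow": ...}) is your own convention:

```python
import json
import urllib.request

import runpod  # Runpod's serverless SDK: pip install runpod

# Assumption: a network volume is attached to the endpoint and ComfyUI's
# extra_model_paths.yaml maps its model folders to /runpod-volume.
COMFY_URL = "http://127.0.0.1:8188"  # ComfyUI started by the container

def handler(job):
    """Queue the submitted API-format workflow against local ComfyUI."""
    workflow = job["input"]["workflow"]  # placeholder input shape
    req = urllib.request.Request(
        f"{COMFY_URL}/prompt",
        data=json.dumps({"prompt": workflow}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # ComfyUI returns a prompt_id; polling /history for the
        # finished images is left out of this sketch.
        return json.loads(resp.read())

runpod.serverless.start({"handler": handler})
```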
Fantastic, do you have any documentation for this kind of setup?
Well, there are the Runpod docs; do check them out once you've signed up 🙂
And there are a few tutorials and examples too (on GitHub)
Thank you 🙂