RunPod•4mo ago
Kazrik

ComfyUI Serverless with access to lots of models

Hi, I have a pre-sales question. I currently host a Discord bot and website for image generation using ComfyUI API endpoints on a local PC. It has around 1 TB of checkpoints and LoRAs available for use, but as the number of users grows I'm considering a serverless GPU so I can pay just for compute time. With RunPod serverless, am I able to quickly deploy instances of ComfyUI with any checkpoints/LoRAs a user wants for their generation? I was thinking of keeping the most popular models on RunPod storage for fastest deployment, while rarely used ones are downloaded on demand and swapped out to make room when needed. Am I able to do this, or something similar?
Solution:
By using network storage and serverless
5 Replies
nerdylive
nerdylive•4mo ago
Yes, you can.
Solution
nerdylive
nerdylive•4mo ago
By using network storage and serverless
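(Editor's note: the on-demand swap scheme described in the question could be sketched roughly as below. `ModelCache` and `fetch_fn` are illustrative names, not RunPod APIs; the sketch assumes the cache directory lives on a RunPod network volume and that rarely used models are pulled from some external object store.)

```python
import os
import time

class ModelCache:
    """Hypothetical sketch: keep fetched checkpoints/LoRAs in a directory on
    the network volume, downloading on demand and evicting the
    least-recently-used files when the size budget is exceeded."""

    def __init__(self, cache_dir, max_bytes, fetch_fn):
        self.cache_dir = cache_dir
        self.max_bytes = max_bytes
        self.fetch_fn = fetch_fn   # assumed downloader: fetch_fn(name, dest_path)
        self.last_used = {}        # model name -> last access timestamp
        os.makedirs(cache_dir, exist_ok=True)

    def _used_bytes(self):
        # Total size of everything currently cached.
        return sum(
            os.path.getsize(os.path.join(self.cache_dir, f))
            for f in os.listdir(self.cache_dir)
        )

    def get(self, name):
        """Return a local path for `name`, downloading it if absent."""
        path = os.path.join(self.cache_dir, name)
        if not os.path.exists(path):
            self._make_room()
            self.fetch_fn(name, path)  # e.g. pull from object storage
        self.last_used[name] = time.monotonic()
        return path

    def _make_room(self):
        # Evict least-recently-used models until under the size budget.
        while self._used_bytes() >= self.max_bytes and self.last_used:
            victim = min(self.last_used, key=self.last_used.get)
            os.remove(os.path.join(self.cache_dir, victim))
            del self.last_used[victim]
```

Popular models would simply live outside this cache (permanently on the volume), so only the long tail pays the download cost.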
Kazrik
KazrikOP•4mo ago
Fantastic, do you have any documentation for this kind of setup?
nerdylive
nerdylive•4mo ago
Well, there are the RunPod docs; do check them out once you've signed up 🙂 And there are a few tutorials and examples too (on GitHub).
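(Editor's note: for a sense of what a serverless worker looks like, RunPod's Python SDK registers a handler function via `runpod.serverless.start`. Everything else below, such as the payload fields and the stubbed-out ComfyUI call, is an illustrative assumption, not a complete worker.)

```python
# Minimal shape of a RunPod serverless worker. In a real worker, the handler
# would submit the workflow to a local ComfyUI instance and return the result.

def handler(job):
    params = job["input"]  # RunPod delivers the request payload under "input"
    # Hypothetical field: which checkpoint the user asked for.
    checkpoint = params.get("checkpoint", "default.safetensors")
    # ... ensure `checkpoint` is present locally, run the ComfyUI workflow ...
    return {"status": "ok", "checkpoint": checkpoint}

if __name__ == "__main__":
    import runpod  # available in RunPod's serverless environment
    runpod.serverless.start({"handler": handler})
```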
Kazrik
KazrikOP•4mo ago
Thank you 🙂