•Created by Kazrik on 9/4/2024 in #⚡|serverless
ComfyUI Serverless with access to lots of models
Hi, I have a pre-sales question. I am currently hosting a Discord bot and website for image generation using ComfyUI API endpoints on a local PC. It has around 1 TB of checkpoints and LoRAs available, but as the number of users grows, I'm considering a serverless GPU service where I pay only for compute time.
With RunPod serverless, am I able to quickly deploy instances of ComfyUI with any checkpoints/LoRAs the user wants for their generation? I was thinking of keeping the most popular models on RunPod storage for the fastest deployment, while rarely used ones are downloaded on demand and swapped out to make room when needed.
Am I able to do this, or something similar?
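For context, the "download on demand and swap out" scheme I have in mind could be sketched roughly like this. This is only an illustration, not RunPod's API: the paths, size cap, and `download_model` stub are all hypothetical placeholders (a real worker would mount a network volume and fetch real checkpoints from object storage or Hugging Face).

```python
import os
from collections import OrderedDict

CACHE_DIR = "/tmp/model_cache"        # hypothetical path; a real setup would use a mounted volume
CACHE_LIMIT_BYTES = 200               # tiny cap for the example; a real cap would be tens of GB

# name -> size in bytes, ordered from least to most recently used
_lru: "OrderedDict[str, int]" = OrderedDict()

def download_model(name: str, dest: str) -> None:
    """Placeholder for an on-demand fetch (e.g. from S3 or Hugging Face)."""
    with open(dest, "wb") as f:
        f.write(b"\0" * 100)          # stand-in for real checkpoint bytes

def get_model(name: str, approx_size: int = 100) -> str:
    """Return a local path for `name`, downloading it and evicting LRU models as needed."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    path = os.path.join(CACHE_DIR, name)
    if name not in _lru:
        # Evict least-recently-used models until the new one fits under the cap.
        while _lru and sum(_lru.values()) + approx_size > CACHE_LIMIT_BYTES:
            oldest, _ = _lru.popitem(last=False)
            os.remove(os.path.join(CACHE_DIR, oldest))
        download_model(name, path)
        _lru[name] = approx_size
    _lru.move_to_end(name)            # mark as most recently used
    return path
```

So a request for a popular model would hit the cache immediately, while a rare model would cost one download plus possibly an eviction before generation starts.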
8 replies