How to download models for Stable Diffusion XL on serverless?
1) I created a new network storage of 26 GB for various models I'm interested in trying.
2) I created a Stable Diffusion XL endpoint on serverless, but couldn't attach the network storage.
3) After the deployment succeeded, I clicked on edit endpoint and attached that network storage to it. So far so good, I believe. But how exactly do I download various SDXL models into my network storage so that I can use them via Postman? Many thanks
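One common way to get models onto a network volume is to attach it to a temporary pod and download from there. A minimal sketch, assuming the volume is mounted at `/workspace` (the default) and using `huggingface_hub`; the folder layout and model id here are illustrative examples, not anything this thread prescribes:

```python
import os

def model_target_dir(volume_root: str, repo_id: str) -> str:
    """Build a per-model folder under the network volume, e.g.
    /workspace/models/dreamshaper-xl for stablediffusionapi/dreamshaper-xl."""
    return os.path.join(volume_root, "models", repo_id.split("/")[-1])

# Inside a pod with the volume attached, the actual download could be done with
# huggingface_hub (pip install huggingface_hub), roughly:
#   from huggingface_hub import snapshot_download
#   snapshot_download("stablediffusionapi/dreamshaper-xl",
#                     local_dir=model_target_dir("/workspace",
#                                                "stablediffusionapi/dreamshaper-xl"))
target = model_target_dir("/workspace", "stablediffusionapi/dreamshaper-xl")
print(target)  # → /workspace/models/dreamshaper-xl
```

Anything saved under the mount point persists on the volume after the pod is terminated, so the worker can read it later.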
The quick-deploy SDXL endpoint is based on diffusers, and it has the model baked in.
So if I wanted to use DreamShaper XL, could I do this with that? Or do I need to clone https://github.com/runpod-workers/worker-sdxl, add DreamShaper XL to it, push it to Docker Hub, and then pull it in as a serverless template?
It should work if the model is in diffusers format:
https://github.com/runpod-workers/worker-sdxl/blob/10b177bff0ec746b48cf9a4e4c682797ad04d42c/src/rp_handler.py#L42
https://github.com/runpod-workers/worker-sdxl/blob/10b177bff0ec746b48cf9a4e4c682797ad04d42c/builder/cache_models.py#L34
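"Diffusers format" here means the Hugging Face multi-folder layout (a `model_index.json` plus `unet/`, `vae/`, `text_encoder/` subfolders) rather than a single `.safetensors` checkpoint. A rough local heuristic for checking a downloaded model folder (my own sketch, not part of the worker):

```python
import os

def looks_like_diffusers_format(model_dir: str) -> bool:
    """Heuristic: a diffusers-format pipeline folder carries a model_index.json
    that lists its components (unet, vae, text encoders, scheduler)."""
    return os.path.isfile(os.path.join(model_dir, "model_index.json"))
```

If a model only ships as a single checkpoint file, it won't load via the plain `from_pretrained(repo_id)` call this worker uses.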
Ah, so it's currently using the base model stable-diffusion-xl-base-1.0?
So do I have to clone https://github.com/runpod-workers/worker-sdxl and manually change stabilityai/stable-diffusion-xl-base-1.0 to stablediffusionapi/dreamshaper-xl in those two files? Is there no environment variable to inject instead?
Nope, no env var.
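Since there is no environment variable, the swap is a one-line string change in each of the two files linked above. A throwaway sketch of doing it with a script instead of by hand (the file paths are assumed from the repo layout at the commit linked earlier):

```python
from pathlib import Path

OLD_ID = "stabilityai/stable-diffusion-xl-base-1.0"
NEW_ID = "stablediffusionapi/dreamshaper-xl"

def patch_model_id(path: Path, old: str = OLD_ID, new: str = NEW_ID) -> bool:
    """Replace the baked-in model id in one source file.
    Returns True if the file was actually changed."""
    text = path.read_text()
    if old not in text:
        return False
    path.write_text(text.replace(old, new))
    return True

if __name__ == "__main__":
    # Paths as they appear in a local clone of worker-sdxl.
    for name in ["src/rp_handler.py", "builder/cache_models.py"]:
        p = Path("worker-sdxl") / name
        if p.exists():
            print(name, "patched" if patch_model_id(p) else "unchanged")
```

After patching, rebuild the Docker image and push it, as described below.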
If you would like something more flexible, you can use https://github.com/ashleykleynhans/runpod-worker-a1111
Ahh nice. But this repo is based on the classic SD, not SDXL, correct?
In that case, for SDXL I will try to clone it and change the files myself. Then I need to add the model to my Docker image and push it to Docker Hub, correct? Then in RunPod I would create a template based on the Docker Hub image and build a new serverless endpoint from it? Does my plan make sense so far? 🙂
And will the model that the Docker image downloads be added to the attached network storage? I have a feeling that, because no environment variable is passed in, the model lives inside the image on local storage instead of on network storage. I hope I'm wrong, because otherwise it would take a very long time each time I POST to the endpoint.
a1111 supports SDXL, you just need to get the safetensors file.
For runpod-workers/worker-sdxl you would need to edit the lines I pointed to, rebuild the image, and push it to Docker Hub.
runpod-worker-a1111 supports network storage
Thanks, yes, I'm making progress with runpod-worker-a1111. Is there a way to check from the dashboard how much space is left on the network storage?
Not unless you attach it to a pod
OK, after I attach it to a pod, how would I do that? df -h?
I don't think that works on network storage, because it shows everything.
No, df -h will show you the space on the entire network storage, not just what is assigned to you
Check the usage for your pod in the RunPod web console.
It has a percentage indicator, which is also a bit crappy; it would be nice if it showed the actual space used when you hover over it or something.
Ah yeah. It says 95%. It would be good if it gave us an actual number instead of guesswork.
Well, you can do the math, so no guesswork needed, but it's still an unnecessary waste of time for something that could have better UX.
Sorry, I worded it badly. Of course I could do 5% of 50 GB. What I meant is that having the actual number is more accurate and convenient.
Yeah definitely, I agree that it can do with improvement
Would I be able to use SD3 with this, or do I need to update some things?
Not sure if a1111 supports SD3 yet.
Check the a1111 github repo
And that repo, which a1111 version is it on?
Okay, thanks. Looks like there isn't an updated version yet.
This looks like a solid option though. https://github.com/blib-la/runpod-worker-comfy
Do you have any idea how I can format my workflow to fit the recommended structure?
Recommended:
https://github.com/blib-la/runpod-worker-comfy/blob/main/test_resources/workflows/workflow_webp.json
My workflow
That's just the standard workflow recommended by ComfyUI.
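For runpod-worker-comfy, the request body wraps an API-format ComfyUI workflow (exported from the ComfyUI UI via "Save (API Format)") in an `input.workflow` envelope, which is what the linked test resource shows. A minimal sketch of building that payload; the tiny inline workflow here is a stand-in for your real exported graph:

```python
import json

def build_comfy_payload(workflow: dict) -> dict:
    """Wrap an API-format ComfyUI workflow in the envelope that
    runpod-worker-comfy reads from the request body (input.workflow)."""
    return {"input": {"workflow": workflow}}

# Stand-in for the JSON you export with ComfyUI's "Save (API Format)" button;
# real graphs map node ids to {"class_type": ..., "inputs": ...} entries.
workflow = {"3": {"class_type": "KSampler", "inputs": {"seed": 42}}}
payload = build_comfy_payload(workflow)
print(json.dumps(payload))
```

This payload is what you would POST (via Postman or otherwise) to the serverless endpoint's `/run` or `/runsync` URL.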