RunPod•11mo ago
Phando

Docker image and SD Models

My docker image for ComfyUI is going to contain several SD and SDXL models. Is it best to include those as part of the image or have them downloaded on startup?
Solution:
Part of the setup in the docker image @Phando
78 Replies
Solution
justin
justin•11mo ago
Part of the setup in the docker image @Phando
justin
justin•11mo ago
You'll get the best speed and optimization that way, imo. I find that as long as the image is about < 30 GB it is workable; 15 GB to 25 GB is what I tend to aim for.
Phando
PhandoOP•11mo ago
Thank you! < 30 GB total, or per file?
justin
justin•11mo ago
Total for the docker image, maybe up to 35 GB.
Phando
PhandoOP•11mo ago
Roger that, thanks again.
justin
justin•11mo ago
Yeah, if you plan to do this for serverless, I'd also split your docker files into two, like in https://discord.com/channels/912829806415085598/1194695853026328626 Where one Dockerfile is:

FROM RUNPOD:PYTORCH TEMPLATE
DOWNLOAD MODEL
DOWNLOAD MODEL

Then your second Dockerfile is:

FROM FIRSTDOCKERFILE
COPY HANDLER.PY

and whatever else.
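A rough sketch of that split (the image name, model URL, and paths here are placeholders, not real ones):

# Dockerfile 1: the big base image with the models baked in
FROM runpod/pytorch:2.1.0-py3.10-cuda11.8.0-devel-ubuntu22.04
WORKDIR /app
# Heavy, rarely-changing layer: fetch the models once at build time
# (assumes wget is available in the base image)
RUN mkdir -p /app/models && \
    wget -O /app/models/sd_xl_base_1.0.safetensors \
    https://example.com/your-model-host/sd_xl_base_1.0.safetensors

# Dockerfile 2: the small, fast-to-rebuild serverless image
FROM yourdockerusername/comfy-base:1.0
WORKDIR /app
COPY handler.py /app/handler.py
CMD ["python", "-u", "handler.py"]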
Phando
PhandoOP•11mo ago
I am new to docker and runpod, thanks for the link
justin
justin•11mo ago
This way your first docker file has all your models already there, and you don't need to always rebuild with the models re-downloading.
Phando
PhandoOP•11mo ago
That is only if things go over 30 GB, right?
justin
justin•11mo ago
Nah, just in general I think it's good practice. Because let's say you want to change your function in handler.py: if you had it all in one docker file, you might need to re-download all the models again if the layers in the docker file changed around. See the sketch below.
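If you do keep everything in one Dockerfile, the same idea applies to layer order: put the model downloads before the handler, so editing handler.py only rebuilds the last layer (paths here are illustrative):

FROM runpod/pytorch:2.1.0-py3.10-cuda11.8.0-devel-ubuntu22.04
WORKDIR /app
# Stable, heavy layer: stays cached as long as this line doesn't change
RUN mkdir -p /app/models && \
    wget -O /app/models/model.safetensors https://example.com/model.safetensors
# Frequently edited file goes last: changing it leaves the model layer cached
COPY handler.py /app/handler.py
CMD ["python", "-u", "handler.py"]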
Phando
PhandoOP•11mo ago
makes sense
justin
justin•11mo ago
Also, you can test in GPU Cloud and not just serverless. So you can, for example, spin up a GPU Pod with Dockerfile 1, do some testing, and then once you have working Python code, use it for serverless.
Phando
PhandoOP•11mo ago
This is how I get started making edits to the helloworld image, right?
docker run --gpus all -it --name my_container image_name /bin/bash
justin
justin•11mo ago
Ah, that is also OK I guess. I do it differently because I'm on a Mac: I like to make a "GPU pod version" of a docker image I can run on a RunPod GPU Pod, since I don't have a GPU locally. Then I can just write my handler.py through RunPod, and if it looks good, copy and paste it locally and bundle it into my "serverless docker". https://discord.com/channels/912829806415085598/1194693049897463848 Also, I'll say: if you find your builds really slow to push and build, I use Depot. The first 50 GB is free, so basically you can get some pretty large docker caches, and it's more optimized than the way docker caches. Also my internet sucks personally, so pushing from my computer can take like 8 hours on my home network lol
Phando
PhandoOP•11mo ago
Thank you for the links and info, I will let you know how I get on
justin
justin•11mo ago
Yeah, no worries. A more detailed example from my private repo. Dockerfile 1:
# Use the updated base CUDA image
FROM runpod/pytorch:2.1.0-py3.10-cuda11.8.0-devel-ubuntu22.04

WORKDIR /app

# Best practices for minimizing layer size and avoiding cache issues
RUN apt-get update && apt-get install -y --no-install-recommends \
    ffmpeg \
    python3-dev \
    default-libmysqlclient-dev \
    build-essential \
    pkg-config \
    && rm -rf /var/lib/apt/lists/* \
    && pip install --no-cache-dir \
    mysqlclient \
    torch==2.1.2 \
    torchvision \
    torchaudio \
    xformers \
    firebase-rest-api==1.11.0 \
    noisereduce==3.0.0 \
    runpod \
    ffmpeg-python \
    openai

# Install audiocraft from the git repository
RUN pip install git+https://github.com/facebookresearch/audiocraft#egg=audiocraft

COPY . .
COPY cert.pem /etc/ssl/cert.pem

RUN python /app/preloadModel.py
This one I can use on GPU Pod. Dockerfile 2:
# Use the updated base CUDA image
FROM username/podname:1.0

WORKDIR /app
COPY handler.py /app/handler.py
# Set Stop signal and CMD
STOPSIGNAL SIGINT
CMD ["python", "-u", "handler.py"]
Phando
PhandoOP•11mo ago
So the goal is to have docker do an installation on startup, vs. having an image of a system that is good to go?
justin
justin•11mo ago
No, the goal is to have the models already inside of the image 🙂 Do the installation and bake the models into the image, so that when your handler.py calls for your model, it already exists in the image and doesn't need to download it. Downloading takes time, and storing the model on a network volume (network volumes are essentially persistent hard drives you can reuse between image startups) also takes time, because external drives are just inherently slower than being on the same machine as the image. So the best is to always have everything bundled into docker.
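For example, a minimal handler along these lines just reads the model straight off the image's disk. The model path and the diffusers loader here are illustrative assumptions, not something from this thread:

import runpod
from diffusers import StableDiffusionXLPipeline  # assumed loader for an SDXL checkpoint

# Loaded once when the worker boots; the file was baked into the image at
# build time, so there is no download and no network-volume read here.
MODEL_PATH = "/app/models/sd_xl_base_1.0.safetensors"  # placeholder path
pipe = StableDiffusionXLPipeline.from_single_file(MODEL_PATH).to("cuda")

def handler(job):
    # RunPod passes the request payload under job["input"]
    prompt = job["input"]["prompt"]
    image = pipe(prompt).images[0]
    image.save("/tmp/out.png")
    return {"image_path": "/tmp/out.png"}

runpod.serverless.start({"handler": handler})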
Phando
PhandoOP•11mo ago
Is your Dockerfile 1 named podname:1.0? Oh, you have one file for GPU Pod and the other for serverless?
justin
justin•11mo ago
Dockerfile 1 is just an example of whatever you might want to push to Docker Hub. You can push Dockerfile 1 to dockerhub.com as something like yourdockerusername/imagename:version. Then your second file just uses what you pushed as the "base image", or a starting-point image, to build off of.

One is for GPU Pod, and one is for serverless. For the GPU Pod one, I am using a base template from RunPod, which comes with Jupyter notebook, OpenSSH, etc., so that I can just use it on GPU Pod easily. Then in the second one, I override the start command in the CMD and just run the handler.py. If you don't specify a CMD command, you use the base image's CMD, which is why in Dockerfile 1 you don't see a CMD command: it's running the RunPod template's CMD by default underneath the hood, which starts up an OpenSSH server and a JupyterLab server, so I can use it on RunPod's GPU Pod service easily.

And RunPod's serverless you can essentially think of as their "GPU Pod service" (a persistent Linux server with a GPU attached), but it turns off and on automatically based on how your handler.py finishes working.
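The build-and-push flow is roughly like this (image names are placeholders):

# Build and push the big base image once (Dockerfile 1)
docker build -f Dockerfile1 -t yourdockerusername/imagename:1.0 .
docker push yourdockerusername/imagename:1.0

# Dockerfile 2 starts with FROM yourdockerusername/imagename:1.0,
# so rebuilding and pushing it after a handler.py change is quick
docker build -f Dockerfile2 -t yourdockerusername/imagename-serverless:1.0 .
docker push yourdockerusername/imagename-serverless:1.0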
Phando
PhandoOP•11mo ago
Sounds good, I've got some reading and flailing to do.
justin
justin•11mo ago
Haha yeah, ChatGPT is great. Before I worked with RunPod I had never done Docker before, but ChatGPT is a great starting point, and Depot makes it very easy to build off of. If you want to give yourself a smaller iteration time, you can try to build an image for GPU Pod yourself, like a hello world, then have a handler.py as a second docker image to just do something.
Phando
PhandoOP•11mo ago
I was planning to start with hello world
justin
justin•11mo ago
And that way you can do smaller tests first.
Phando
PhandoOP•11mo ago
I didn't agree with or understand some of the ComfyUI worker, so I was planning on adding Comfy to hello world.
justin
justin•11mo ago
I would just note, if you make your own custom image: https://github.com/justinwlin/FooocusRunpod You can see an example here, where I made one for Fooocus. You might get prompted about tokens / passwords if you use JupyterLab and log into it through GPU Pod; I talk about it there. For serverless this is not an issue; it's just something to note when you start a Jupyter server in general, in case you end up seeing it and wonder what to do. You'll probably hit it at some point. But you can see that for Fooocus, it's a pretty easy Dockerfile too:
# Use the specified base image
FROM runpod/pytorch:2.0.1-py3.10-cuda11.8.0-devel-ubuntu22.04

# Run system updates and clean up
RUN apt-get update && apt-get upgrade -y && apt-get clean && rm -rf /var/lib/apt/lists/*

# Set the working directory to /
WORKDIR /

# Clone the Fooocus repository into the workspace directory
RUN git clone https://github.com/lllyasviel/Fooocus.git

# Change the working directory to /workspace/Fooocus
WORKDIR /Fooocus

# Install Python dependencies
# Using '--no-cache-dir' with pip to avoid use of cache
RUN pip install --no-cache-dir xformers==0.0.22 \
    && pip install --no-cache-dir -r requirements_versions.txt
All I do is use RunPod + install some stuff, etc. You can run a GPU Pod on RunPod and walk through the installation steps manually in the JupyterLab terminal; that is what I do too. Then I just ask ChatGPT to do it for me xD and make a Dockerfile based off what I ran in the terminal on the GPU Pod. It then makes a custom docker file for me, to then deploy myself.
Phando
PhandoOP•11mo ago
Easy peasy. Do you just put Comfy at the root of the docker image?
justin
justin•11mo ago
Ah no, you probably want something like:

FROM runpod/pytorch:2.0.1-py3.10-cuda11.8.0-devel-ubuntu22.04
INSTALL COMFY

The "root", if you're talking about the FROM, should always be RunPod; it will be easier that way. Then just add manual steps for installing Comfy. I'm not too familiar with what the CLI commands are for Comfy (never used it myself), but it's good to start from RunPod because you can guarantee starting with some basic stuff like Python, PyTorch, CUDA, OpenSSH, JupyterLab, etc.
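An untested sketch of what those install steps could look like, going off ComfyUI's standard setup (the repo URL is ComfyUI's; the rest is an assumption):

FROM runpod/pytorch:2.0.1-py3.10-cuda11.8.0-devel-ubuntu22.04
WORKDIR /app
# Clone ComfyUI and install its Python dependencies
RUN git clone https://github.com/comfyanonymous/ComfyUI.git
WORKDIR /app/ComfyUI
RUN pip install --no-cache-dir -r requirements.txt
# SD/SDXL checkpoints would then be baked in under models/checkpoints/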
Phando
PhandoOP•11mo ago
I was talking about the directory structure
justin
justin•11mo ago
Oh, you can install it wherever. You can install it to root.
Phando
PhandoOP•11mo ago
k
justin
justin•11mo ago
/app, etc. I like to install things under /app because if you have a network volume, it mounts to /workspace by default, and the volume will hide what you had there. (But you probably won't have a network volume anyways, so it's not something for you to worry about.) But yeah~ install wherever you see fit 🙂
Phando
PhandoOP•11mo ago
Making my GPU Pod now, do I want to tick ssh and jupyter?
justin
justin•11mo ago
Tbh I don't know what you mean by tick? SSH and Jupyter are already going to be included in your FROM runpod/pytorch stuff 🙂 and as long as you don't override the CMD command, it
Phando
PhandoOP•11mo ago
Checkboxes. Should I check the SSH and the Start Jupyter boxes?
justin
justin•11mo ago
will auto-start those. What checkboxes? I don't see any?
justin
justin•11mo ago
(screenshot: GPU pod templates)
justin
justin•11mo ago
At least, if you're talking about these GPU templates. I'm confused if you're talking about a dockerfile, since that's just a text file.
Phando
PhandoOP•11mo ago
(screenshot: pod deployment checkboxes for SSH and Start Jupyter Notebook)
justin
justin•11mo ago
Ohh yeah, check them. Interesting xD Mine are auto-checked, so I never thought of that / even processed that.
Phando
PhandoOP•11mo ago
I gotta do a little setup for the ssh it looks like
justin
justin•11mo ago
Ah, don't worry about SSH then; just Start Jupyter Notebook is probably fine enough.
Phando
PhandoOP•11mo ago
K
justin
justin•11mo ago
https://discord.com/channels/912829806415085598/1194711850223415348 If you ever want to do it for future stuff, you can follow these steps. I made it pretty easy, so you can SSH into all your future pods. But right now I don't think it's necessary.
Phando
PhandoOP•11mo ago
You have articles on everything! Do you work at RunPod?
justin
justin•11mo ago
No haha, just a community member who got really into RunPod. I'm a frontend web developer mainly.
Phando
PhandoOP•11mo ago
Thank you very much for your support
justin
justin•11mo ago
Yeah, hopefully you should be able to start it up and log into the Jupyter server.
Phando
PhandoOP•11mo ago
Once I spin up the GPU pod, am I going to have to start paying the $0.0006 running disk and exited disk costs? Or is that on demand?
justin
justin•11mo ago
Tbh I forgot what the cost is for storage, because it's usually so little.
Phando
PhandoOP•11mo ago
$0.14 a day, not bad
justin
justin•11mo ago
The main cost is usually from the GPU run time, not really the storage cost.
Phando
PhandoOP•11mo ago
I've been a bit all over the place today. Do you have any experience with this image? https://github.com/ai-dock/comfyui I would love to be able to edit the docker or config a touch, then build and run locally before sending to RunPod. I have my Windows machine all set up with a WSL Ubuntu instance per the instructions found in https://github.com/blib-la/runpod-worker-comfy
justin
justin•11mo ago
I don't, unfortunately.
Phando
PhandoOP•11mo ago
Dang!
justin
justin•11mo ago
I don't really use ComfyUI / LLMs on here, but the creator is somewhere on here. If you search in general for that link, you might be able to ping him in general chat if you've got some generic questions to ask him about.
Phando
PhandoOP•11mo ago
I'll give it a go, thanks. I kinda love Comfy. Yeah, just the general workflow on how to go from the git repo to running locally. Thanks again.
ashleyk
ashleyk•11mo ago
Check the templates section. I believe RobBalla has a post there for it.
RobB
RobB•11mo ago
That (my) image supports GPU cloud and serverless. It's getting some updates this week hopefully to support multiple formats in the serverless output. You can test the API in GPU cloud - it's mounted on /rp-api on the ComfyUI port
Phando
PhandoOP•11mo ago
Thank you all for the info. I am new to RunPod and Docker, so the combo is a bit daunting. Per the instructions on https://github.com/blib-la/runpod-worker-comfy I have set up my Windows machine with WSL, Ubuntu, and Docker. My intent is to build and test my docker image in the WSL Ubuntu and then upload it to RunPod or Docker Hub. My workflows use several custom nodes and some private loras. I was using timpietruskyblibla/runpod-worker-comfy:latest at first, but the ComfyUI in there is too old to support the manager. That is when I started looking at @RobBalla's repo. Everything is making sense in there with the exception of how to build and test on my Ubuntu instance. I guess where I am unclear is the Docker workflow. @justin was super helpful with lots of articles, but his workflow was a little different than mine. Is there a good tutorial for how to get started with Docker on Windows and Ubuntu, and/or a tutorial for developing my image on a Mac and using the Windows Ubuntu as the server?
justin
justin•11mo ago
ChatGPT! Haha. Plus YouTube videos on docker setups. But I'm not sure what you mean by server. Essentially, you need Docker downloaded to your computer so you can use the Docker CLI. The Docker CLI lets you transform a text file, called a Dockerfile, into an "image", which is a snapshot you can then build "containers" from (which are running instances). Once you get Docker installed and the CLI working, you should be able to build images and push to Docker Hub. And for faster speed, use Depot. For Windows, sometimes you've gotta run commands with sudo. https://www.youtube.com/results?search_query=docker+setup+on+mac https://www.youtube.com/results?search_query=docker+setup+on+windows+wsl But once you are able to get Docker running and working in the terminal, you should be able to run all the commands and push to Docker Hub for usage.
Phando
PhandoOP•11mo ago
Thanks a million, watching the tutorials now and asking ChatGPT all the things. Not sure how I have avoided Docker for so long.
justin
justin•11mo ago
I've avoided Docker for many, many years xD I've had like 5 courses on it, and it took RunPod + ChatGPT explaining how to do stuff for me to actually do it. It's very, very daunting without something to tell you the right commands.
Phando
PhandoOP•11mo ago
Thank you for the post, and I am looking forward to the updates. My goal is to have several custom nodes and a copyrighted lora as part of my Comfy instance; I am not doing anything too crazy. Your template on RunPod has a field in the custom setup for an alternate PROVISIONING_SCRIPT, which, if it could be private, would be ideal. Otherwise I have been trying to run locally and having issues. I have Ubuntu set up under Windows WSL per the instructions here (https://github.com/blib-la/runpod-worker-comfy). Can you please verify I am doing things right? Do I need to uncomment the AMD GPU lines in the docker-compose.yaml? What env variables need to be in my .env? (Currently IMAGE_TAG and CF_TUNNEL_TOKEN.) I am seeing an error when running docker compose up: (useradd: cannot open /etc/passwd). Docker is starting to click; today I have Docker Desktop on Windows talking to the docker images in the WSL Ubuntu setup. I am getting close to running locally!
justin
justin•11mo ago
Nice! I'll say, ask ChatGPT, but there's a flag when you start a container, --gpus all or something, that allows you to start the container with access to all GPUs. Or if you have no GPU, you can push it to Docker Hub and run it on a RunPod GPU Pod, and then you wouldn't need to worry about that.
Phando
PhandoOP•11mo ago
I have tried docker run with the --gpus all flag, as well as docker compose up.
justin
justin•11mo ago
Ah, I don't know anything about docker compose lol.
Phando
PhandoOP•11mo ago
I'll try Docker Hub, thanks.
justin
justin•11mo ago
I'm not that smart at docker xD I'm at the basic build-container-and-push level.
Phando
PhandoOP•11mo ago
It's in the readme.
RobB
RobB•11mo ago
Some of that is getting stripped out in the next update (tomorrow, likely). Just remove anything related to the rclone mount from the compose file, as I'm dropping support completely. I don't know anything about the blib-la repository; it's unrelated to my images, and I have no experience building with Docker on Windows or Mac. I haven't used a Windows desktop in many years. The provisioning script is for installing nodes and pulling models at runtime. The models go to /workspace/storage and map back to the ComfyUI model directories, so you don't have to re-download on every run. It has to be a publicly accessible URL, but it has access to environment variables when it gets pulled into the container, so you can have it download private assets. You'll also find a GitHub Actions workflow in the .github directory. You can modify that to get auto builds when you push to the main branch of your fork.
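So a provisioning script could pull a private lora with something like this (the env variable name, URL, and exact target path are all hypothetical; check the template's stock script for the real layout):

#!/bin/bash
# MY_ASSET_TOKEN would be set as an environment variable on the pod or endpoint
wget --header="Authorization: Bearer ${MY_ASSET_TOKEN}" \
    -O /workspace/storage/stable_diffusion/models/lora/private_lora.safetensors \
    "https://example.com/private/private_lora.safetensors"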
Phando
PhandoOP•11mo ago
Thanks! Ripping out the rclone stuff now. Just got everything up and running using the comfyui:latest preconfigured template with a custom provisioning script. When it was all up and running, the manager was not installed in the ComfyUI instance. I see it in the provisioning file; am I doing something wrong?
papanton
papanton•11mo ago
@justin how does image size affect cold start time?
justin
justin•11mo ago
A larger image means more cold start, because more dependencies need to be loaded. But really the biggest cost is probably however big your model is, getting loaded into VRAM. If you've got a really big ML model and it has to go into VRAM, that's usually the biggest cost. And also, if your model isn't on the image itself but rather on a network drive, there is a larger cost to load the model from that storage over into VRAM.
brgr
brgr•10mo ago
Hm, I have a 30 GB docker image that takes >10 min to load on cold start on serverless! What's going wrong here? It seems that some of the serverless clients have a pretty slow connection. Any hint how I can overcome this?
ashleyk
ashleyk•10mo ago
Serverless workers pull your Docker image in advance of serving requests. Your docker image size and the pulling of the docker image have nothing to do with cold starts. Your workers will only be affected if you do stupid things like pushing to the same tag. By the way, this topic is marked as solved, so please log a new one; it is inappropriate to hijack a solved post with new questions.
brgr
brgr•10mo ago
Hey, I have seen the thread "Should I be getting billed during initialization?". I would not call it stupid if one pushes to the same tag; it's more "stupid" to get charged for the download. Thanks for the tip!
ashleyk
ashleyk•10mo ago
It's bad practice to use the same tag; don't do it. If you are doing that, you are doing it wrong, and it's most likely why you are having issues. It's fine for pods but not fine for serverless. It's stupid, breaks things, and is completely wrong.