How does RunPod work with custom Docker images? I have several questions:
1. If I use my own Docker Hub image, does it have to pull the image from Docker Hub every time?
2. I tried to use a community template (ComfyUI - AI-Dock) and the pull from ghcr is very slow. This relates to the first question: is there something that affects pull speed? It's frustrating because I'm still charged while I wait for the image to download. This image takes about 40 minutes to download ~5 GB, while others take a couple of minutes for the same 5 GB, so it's not a problem on my end.
3. Are there any workarounds for pulling the image every time, or for bumping up the pull speed? Can I request that this image be added to the cache? I'll be using it from now on.
4. How do I use registry credentials? What do I set as the password — my Docker account password, or a token I generate with Docker?
5. Is there any way to reconstruct a RunPod image? For example, I want to take "runpod/pytorch:2.2.1-py3.10-cuda12.1.1-devel-ubuntu22.04" and reconstruct it with slight modifications — not use it as a base, but reconstruct it.
Thank you.
1. Yes
3. Sure, you can request caching features on their system, but I don't think it's configurable per pod. Pull speed depends on the connection between the server you chose and the registry, so I guess there's no way to "bump up the speed" yet.
4. Generate a token and check the Docker docs on CLI authentication (`docker login`).
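In practice that looks something like the sketch below. The username `myuser` and the `$GHCR_TOKEN` variable are placeholders; the token itself is an access token you generate in Docker Hub (Account Settings → Security) or a GitHub personal access token for ghcr.io — not your account password:

```shell
# Log in to Docker Hub; when prompted for a password, paste the access token
docker login -u myuser

# For another registry such as ghcr.io, name the registry host explicitly
# and feed the token on stdin so it doesn't land in your shell history
echo "$GHCR_TOKEN" | docker login ghcr.io -u myuser --password-stdin
```

On RunPod's side you'd store the same username/token pair as the registry credentials for the template.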
5. Reconstruct? What do you mean by not using it as a base?
To reconstruct it with slight modifications, you can either rebuild the image (reference it in a `FROM` instruction in a Dockerfile), or run the image and make the modifications inside the running container.
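The rebuild route is a short Dockerfile like this — a minimal sketch where the extra packages (`ffmpeg`, `einops`) are just placeholder modifications, not anything the image actually needs:

```dockerfile
# Use the RunPod image as the base and layer changes on top
FROM runpod/pytorch:2.2.1-py3.10-cuda12.1.1-devel-ubuntu22.04

# Example system-level modification
RUN apt-get update && apt-get install -y --no-install-recommends ffmpeg \
    && rm -rf /var/lib/apt/lists/*

# Example Python-level modification
RUN pip install --no-cache-dir einops

WORKDIR /workspace
```

Then `docker build -t myuser/my-pytorch:latest .` and push it to your registry so the pod can pull it.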
Yeah, for (5) you have to use it as a base image; there is no such thing as "reconstructing" it.
@digigoblin @nerdylive By reconstructing it I mean actually rebuilding it from scratch, using another image as a reference. Maybe this process is called something else, but here's what I did in that direction:
1. Find a reference image (runpod/pytorch:2.2.1-py3.10-cuda12.1.1-devel-ubuntu22.04)
2. Analyze the image layers on Docker Hub or with Docker Desktop.
3. Copy-paste all the layer info and their respective commands into a spreadsheet.
4. Give the spreadsheet to ChatGPT and request a Dockerfile based on it.
5. ChatGPT gave me a Dockerfile for a new image that is more or less a replica of my reference image. I'm not sure how accurate it is, but it worked.
6. Rebuild the reference image plus my modifications.
I was wondering if there is a more straightforward way of doing this.
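If the goal is just to recover the commands behind each layer, `docker history` dumps them directly, which is less manual than copying layer info into a spreadsheet. A sketch — note the output is the recorded build commands per layer, not a ready-to-use Dockerfile, so some hand-editing is still needed:

```shell
# Pull the reference image locally
docker pull runpod/pytorch:2.2.1-py3.10-cuda12.1.1-devel-ubuntu22.04

# List every layer with its full (untruncated) creating command
docker history --no-trunc runpod/pytorch:2.2.1-py3.10-cuda12.1.1-devel-ubuntu22.04
```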
Seems ChatGPT already helped you figure it out.
Oh, that's the hard way, and the reconstruction may be inconsistent — why not use it as a base image?
It's the same way, isn't it?
Is it the same way? Doesn't using a base image stack all the previous layers from the base and lock me out of modifying them, only allowing me to add new layers on top? I'm still a beginner with Docker.
Oh yeah, but logically you can still modify it from the Dockerfile. Just more layer stacks, I guess.
And it's not locked, BTW — you just need to find the right paths or commands to do that.
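To illustrate: a later layer can overwrite or remove anything the base layers put on the filesystem, so you're not locked out of changing the base image's content — only the cached layers themselves are immutable. A sketch, assuming you wanted to swap the torchvision version that ships with the base (the version pin here is illustrative):

```dockerfile
FROM runpod/pytorch:2.2.1-py3.10-cuda12.1.1-devel-ubuntu22.04

# "Modify" content that came from the base image:
# remove a package the base installed...
RUN pip uninstall -y torchvision
# ...and replace it with the version you want
RUN pip install --no-cache-dir torchvision==0.17.1
```

The old files still exist in the base layers (so the image doesn't shrink), but the running container only sees the final state.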