RunPod4mo ago
Andy

Optimizing Docker Image Loading Times on RunPod Serverless – Persistent Storage Options?

I'm working with a large Docker image on RunPod Serverless, containing several trained models. While I've already optimized the image size, the initial docker pull during job startup remains a bottleneck, as it takes too long to complete. Is there a way to leverage persistent storage on RunPod to cache my Docker image? Ideally, I'd like to avoid the docker pull step altogether and have the image instantly available for faster job execution. Thanks,
5 Replies
Marcus4mo ago
Workers cache the image up-front in serverless so this should be a non-issue.
AndyOP4mo ago
So the docker pull only takes time the first time a worker starts, and afterward workers cache the image so there will be no issue?
Encyrption4mo ago
You can use a network volume, but using it increases the delay before startup. You can also use larger images than you might think: the largest I have used, with the model baked into the container, was ~35 GB, although I am not sure what the upper limit is.
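For the network-volume route, here is a rough sketch of a handler that loads weights from the volume instead of baking them into the image. It assumes the RunPod Python SDK, PyTorch-serialized weights, and that the network volume is mounted at /runpod-volume on the worker; the directory and file names are placeholders.

```python
import os

import runpod  # RunPod serverless SDK: pip install runpod
import torch   # assuming PyTorch checkpoints; swap in your own framework

# Network volumes are typically mounted at /runpod-volume on serverless workers.
# MODEL_DIR and the checkpoint file name below are hypothetical.
MODEL_DIR = os.environ.get("MODEL_DIR", "/runpod-volume/models")
CHECKPOINT = os.path.join(MODEL_DIR, "model.pt")

# Load once per worker at cold start, then reuse across jobs.
# Assumes a fully serialized model object was saved to the volume.
model = torch.load(CHECKPOINT, map_location="cpu")

def handler(job):
    """Process a single serverless job with the preloaded model."""
    prompt = job["input"].get("prompt", "")
    # ... run inference with `model` here ...
    return {"output": f"processed: {prompt}"}

runpod.serverless.start({"handler": handler})
```

The trade-off above still applies: reading large weights off the network volume adds to cold-start time, while baking them into the image keeps cold starts fast once a worker has the image cached.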
AndyOP4mo ago
@Encyrption Thank you so much for the detailed explanation. In my case, the problem is that my Docker image is around 40 GB, and pushing it to GHCR takes a very long time and sometimes fails. May I know what registry services you use for faster and more reliable pushes?
Encyrption4mo ago
I use Docker Hub for public images and DigitalOcean for private images.
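If it helps, here is a rough sketch of retagging and pushing the same local image to Docker Hub (public) and a DigitalOcean registry (private) with the docker Python SDK (docker-py); the image name, Docker Hub username, registry name, and the DO_TOKEN environment variable are all placeholders.

```python
import os

import docker  # docker-py: pip install docker

client = docker.from_env()

# Hypothetical local image, e.g. built with `docker build -t my-model-image:latest .`
image = client.images.get("my-model-image:latest")

# Public image -> Docker Hub (replace `myuser` with your Docker Hub username).
image.tag("myuser/my-model-image", tag="latest")
for line in client.images.push("myuser/my-model-image", tag="latest",
                               stream=True, decode=True):
    print(line)  # per-layer progress/status

# Private image -> DigitalOcean Container Registry.
# DigitalOcean accepts an API token as both username and password for registry login.
client.login(username=os.environ["DO_TOKEN"], password=os.environ["DO_TOKEN"],
             registry="registry.digitalocean.com")
image.tag("registry.digitalocean.com/my-registry/my-model-image", tag="latest")
for line in client.images.push("registry.digitalocean.com/my-registry/my-model-image",
                               tag="latest", stream=True, decode=True):
    print(line)
```

Since Docker pushes layer by layer and skips layers the registry already has, splitting very large model files across separate COPY steps can also make retries after a failed push cheaper.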