About volumes and images
Hi All,
Great product so far!
I have a few questions (apologies if this is already somewhere in the docs; I haven't found it).
I'm also not sure if this should go here or somewhere else.
1. I suppose not (yet), but I was wondering if there is a way to cache Docker images (e.g. on the network drive).
2. What's the "runpod-volume" that is used for different caches in the base Docker image?
Is that on the pod volume (ephemeral), on the networked storage, or something else?
(I also see a comment saying "Shared python package cache"; what is that shared across?)
3. I'd like to use the same Docker image across several execution setups (local hardware, RunPod, etc.). It would be easier if I could move those ENV values to the serverless endpoint configuration rather than hardcoding them in the image. I don't see any issues with that, but I wanted to check that it's fine.
4 Replies
1. Not possible.
2. Network storage is mounted at /workspace in GPU Cloud and at /runpod-volume in Serverless. There is no "Shared python package cache".
3. You can't really use the same image for local and Serverless: Serverless requires a handler that implements the RunPod SDK, and the serverless infrastructure then invokes it. Best to have one image for GPU Cloud and local, and another image for Serverless that implements the handler.
Is there a problem if the RunPod SDK is installed in an image I use locally? I just use a different command that does not start the RunPod serverless worker locally.
You can give it a try.
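For illustration, here's a minimal sketch of one way a single image could do both, assuming a Python entrypoint and a SERVERLESS env variable (the variable name and the local routine are made up for this example; runpod.serverless.start with a handler is the SDK's documented entry point):

```python
# handler.py - sketch of a single image that can run both ways.
# Assumptions: the RunPod Python SDK ("runpod" package) is installed,
# and a SERVERLESS env variable (name chosen here, not a RunPod
# convention) decides whether to start the serverless worker.
import os

import runpod


def handler(job):
    # RunPod passes the request payload in job["input"].
    prompt = job["input"].get("prompt", "")
    return {"echo": prompt}


def run_locally():
    # Placeholder for whatever the image normally does outside Serverless.
    print(handler({"input": {"prompt": "local test"}}))


if __name__ == "__main__":
    if os.environ.get("SERVERLESS") == "1":
        # Only taken on the Serverless endpoint.
        runpod.serverless.start({"handler": handler})
    else:
        run_locally()
```

Locally the image would just run without SERVERLESS set, so having the SDK installed shouldn't get in the way; the Serverless endpoint would set the variable and serve the handler.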
1. We automatically do this for you; images are cached in regions with network storage.
3. You can use an env variable to make the image behave differently for Serverless than for a normal pod. It's doable, but not ideal.
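Related to that, since the network volume shows up at /workspace in GPU Cloud and at /runpod-volume in Serverless (per the reply above), a single image can also resolve its cache path at runtime instead of hardcoding it. A rough sketch; the CACHE_ROOT variable and the fallback order are assumptions, not RunPod behavior:

```python
# cache_paths.py - sketch of resolving a cache directory at runtime.
# Assumptions: CACHE_ROOT is an env variable you define yourself
# (e.g. in the serverless endpoint configuration); the fallback order
# below is a choice, not RunPod behavior.
import os
from pathlib import Path


def cache_root() -> Path:
    # 1. Explicit override from the endpoint / pod configuration.
    override = os.environ.get("CACHE_ROOT")
    if override:
        return Path(override)
    # 2. Serverless mounts the network volume at /runpod-volume.
    if Path("/runpod-volume").is_dir():
        return Path("/runpod-volume")
    # 3. GPU Cloud pods mount it at /workspace.
    if Path("/workspace").is_dir():
        return Path("/workspace")
    # 4. Local fallback.
    return Path.home() / ".cache"


if __name__ == "__main__":
    print(f"Using cache root: {cache_root()}")
```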