Better solution for volumes stranded with 0 GPUs
Since on-demand GPUs can get taken, it would be great to have some better escape valves for getting our data off the volume. Right now, the 0.5 vCPU / 512 MB RAM pod you give us keeps killing my upload task. I would happily pay for more resources to speed up getting my data out. It would also be nice to be able to attach a network volume to a pod after creation, or to have cross-region network volumes. A network volume that only works in the same region is of limited value, because a big reason for moving data around is that there are no GPUs in the region!
I'd suggest posting this in #🧐|feedback
Also, I think you might want to try the backup upload to cloud storage from the website
I mean the cloud sync option
Interesting, does this get around the resource constraints of the pod?
Nah, it doesn't, but it's working I think
Tell me how it goes with that
Until there is a better solution, I recommend having your volume in the RO data center. There you can get CPU pods for 8 cents/hr that have enough RAM for data transfer
if the GPU selection there fits your needs
Or, as another option, you can use runpodctl transfers on a Google Colab
runpodctl send should work no problem with 512 MB RAM
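Roughly like this (just a sketch — grab whatever the latest runpodctl release on GitHub is, the asset name may differ; the one-time code below is a placeholder printed by `send`):
```bash
# On the pod with the stranded volume: send the data (small memory footprint)
runpodctl send /workspace/my_data.tar.gz
# -> prints a one-time code, e.g. "runpodctl receive 1234-word-word-word"

# In a Google Colab cell: grab runpodctl and receive the data
!wget -q https://github.com/runpod/runpodctl/releases/latest/download/runpodctl-linux-amd64 -O runpodctl
!chmod +x runpodctl
!./runpodctl receive 1234-word-word-word  # paste the code printed by send
```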
I was able to reduce the resource requirements for data upload by modifying my AWS CLI config. However, my main point still stands: better data escape valves would be appreciated, and I would be willing to pay for more CPU to get my data off faster.
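For anyone else stuck on this, these are the documented aws s3 transfer settings I'm talking about, in ~/.aws/config (the exact values here are just a sketch — tune them for your pod):
```ini
# ~/.aws/config — throttle s3 transfers so the CLI fits in 0.5 vCPU / 512 MB
[default]
s3 =
  max_concurrent_requests = 2
  max_queue_size = 100
  multipart_threshold = 64MB
  multipart_chunksize = 8MB
```
Fewer concurrent requests and smaller multipart buffers keep the CLI's memory use low; the upload is slower, but it stops getting killed.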
Yeah, I have seen other people also have issues with the low specs when starting a pod without a GPU, so it would definitely be nice to pay a little more for slightly beefier specs.
Or not pay at all: why not have a file transfer endpoint to just access your data? One CPU can probably serve 100 file transfers at the same time, but we have to rent a whole CPU just to get our data.
RunPod apparently have something on their roadmap for S3-compatible storage.
That's exciting!
I just hope their price isn't too high lol
RunPod usually don't charge for bandwidth, so hopefully not, because egress is the single most prohibitive cost of using S3.
If I have to touch network volume storage, I generally make sure to put it in a region that has CPU pods available.
And then, I guess it depends on how much data you've got, but I do a pod-to-pod transfer using croc, or just SSH the pods directly together, since I find pod-to-pod transfer speed is pretty fast; the bottleneck should be almost non-existent.
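Something like this, roughly (the code phrase, host, and port are placeholders — use the SSH connect info from your pod):
```bash
# Option 1: croc — run on the sending pod, it prints a code phrase
croc send /workspace/dataset/
# on the receiving pod, paste the printed code phrase:
croc 1234-some-code-phrase

# Option 2: rsync over the pods' direct SSH (host/port from the pod's connect info)
rsync -avz --progress -e "ssh -p 12345" \
  /workspace/dataset/ root@203.0.113.10:/workspace/dataset/
```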
But yeah, definitely a limitation 😦