Endpoint initializing for eternity (45 GB Docker image)

Hi! My Docker image is about 45 GB and it has been about 20 hours since it started downloading. https://www.runpod.io/console/serverless/user/endpoint/wzkav7ouarzdxv There are 6 endpoints like this running at the same time, all of them downloading. Our GitLab Docker registry has plenty of outbound bandwidth, so I don't think the download should be bottlenecked on our side.
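For a sense of scale, a quick back-of-the-envelope (assuming GB means 10^9 bytes and ignoring layer compression): 20 hours for 45 GB works out to only about 5 Mbit/s of effective pull speed.
```python
# Back-of-the-envelope: what effective speed does "45 GB in ~20 hours"
# imply, and how long should a single pull take at common link speeds?
# Assumes GB = 10**9 bytes and ignores layer compression.
IMAGE_BYTES = 45 * 10**9
ELAPSED_S = 20 * 3600

effective_mbps = IMAGE_BYTES * 8 / ELAPSED_S / 10**6
print(f"observed effective speed: ~{effective_mbps:.1f} Mbit/s")

for link_mbps in (100, 1_000, 10_000):  # hypothetical link speeds
    seconds = IMAGE_BYTES * 8 / (link_mbps * 10**6)
    print(f"at {link_mbps} Mbit/s a single pull takes ~{seconds / 60:.1f} min")
```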
4 Replies
NikolaiT (OP) · 3w ago
It seems that the download only makes progress while I keep the page with the worker open. When I close it, the downloading stops.
yhlong00000 · 3w ago
The download speed is quite slow. Are you hosting the image yourselves? I noticed that your worker is deployed globally, and it seems unlikely that all of our data center networks would be slow at the same time. When we download an image, we download five copies simultaneously for one endpoint, but your image registry doesn't seem to handle these downloads well. You can try setting max workers to 1 and see if the speed improves.
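To put rough numbers on why capping workers can help (the uplink speeds below are hypothetical examples, not measurements): 5 copies per endpoint across 6 endpoints means around 30 concurrent pulls of the same 45 GB image, so the registry's outbound bandwidth gets split thin per pull.
```python
# Rough math on the registry side: 5 simultaneous copies per endpoint
# times 6 endpoints means ~30 concurrent pulls of the same 45 GB image.
# The uplink speeds below are hypothetical examples, not measured values.
IMAGE_GB = 45
COPIES_PER_ENDPOINT = 5
ENDPOINTS = 6

pulls = COPIES_PER_ENDPOINT * ENDPOINTS
print(f"{pulls} concurrent pulls, ~{pulls * IMAGE_GB / 1000:.2f} TB in flight at once")

for uplink_gbps in (1, 10):  # hypothetical registry uplink
    per_pull_mbps = uplink_gbps * 1000 / pulls
    hours = IMAGE_GB * 8 * 1000 / per_pull_mbps / 3600
    print(f"{uplink_gbps} Gbit/s uplink -> ~{per_pull_mbps:.0f} Mbit/s per pull "
          f"-> ~{hours:.1f} h per copy")
```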
NikolaiT (OP) · 3w ago
Thank you! I will try limiting max workers to 1 and see if it works. BTW, is it possible to increase the worker count later? We are at the pre-launch stage and wondering if we can scale our infra with RunPod in the future.
yhlong00000 · 3w ago
Yes: deposit $100 -> 10 workers, $200 -> 20 workers, $300 -> 30 workers. If you need more, you can open a support ticket.