Does the pod hardware differ a lot in the US?
Hi,
We've deployed several times in the US region (Secure Cloud) with the runpod CLI, but inference performance/speed differs a lot between pods; even model loading time varies a lot. What's the reason? And how do I know which data center I'm using? It only shows 'US'.
thanks
12 Replies
I used lscpu, and it looks like the CPUs are the same model; right now the only difference I can see is the NVIDIA driver version.
The GPU is a 4090.
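In case anyone wants to compare their pods the same way, this is roughly what I ran on each one (plain shell, nothing pod-specific assumed):
```bash
# CPU model as seen inside the container
lscpu | grep 'Model name'
nproc   # vCPU count allotted to the pod
# GPU model and NVIDIA driver version
nvidia-smi --query-gpu=name,driver_version --format=csv,noheader
```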
sysbench shows some memory-speed difference between the pods.
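For reproducibility, a typical run on my side looks like this (the block/total sizes are just the values I happened to use):
```bash
sysbench memory --memory-block-size=1M --memory-total-size=10G run \
  | grep -E 'transferred|total time'
```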
top shows some CPU difference (the inference was stopped).
The load average and user time show big differences, even though both pods have the same processes and environment (inference program stopped).
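The load/CPU comparison was nothing fancy, just the standard tools, run with the inference program stopped:
```bash
uptime                 # 1/5/15-minute load averages
top -b -n 1 | head -5  # batch-mode snapshot; %Cpu(s) line shows us/sy/st
vmstat 1 5             # per-second CPU and memory stats
```
On a shared host the "st" (steal) value in top is worth watching, since it reflects CPU time taken by other tenants.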
Maybe the difference comes from the host VM's CPU usage.
I think the memory benchmark results confirm the performance difference of my programs on the two pods.
Is it possible that a busy CPU on the host machine (maybe from other containers on the same host) causes memory contention and thus slows down memory speed?
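One rough way to test that theory, assuming nothing about the host: repeat the same memory benchmark several times and look at the spread. A noisy neighbor should show up as high run-to-run variance rather than a uniformly lower number. A minimal sketch:
```bash
# repeat the memory benchmark and collect throughput; high run-to-run
# variance would point at contention from other tenants on the host
for i in $(seq 1 5); do
  sysbench memory --memory-block-size=1M --memory-total-size=10G run \
    | grep 'transferred'
  sleep 10
done
```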
Hi RunPod, we really need your help; this is severely affecting our inference performance.
I'm not using a network volume; I just create the pod from the runpod CLI and pass in the arg "US".
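For reference, the creation command is roughly the following. This is a sketch from memory: the pod name and image name are placeholders, flag names vary across runpodctl versions, and I'm not showing the exact flag that takes the "US" arg, so double-check with `runpodctl create pod --help`:
```bash
runpodctl create pod \
  --name my-inference-pod \
  --gpuType "NVIDIA GeForce RTX 4090" \
  --imageName my-inference-image:latest \
  --secureCloud
# plus whatever region/country argument ("US") your CLI version accepts
```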
And you have another problem, where performance differs between pods with the same specification?
Oh my bad, I think that's just the way it is in the normal UI. Not sure how to check it, but can't you specify the region when creating with the runpod CLI?
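Actually, one thing that might work: if I remember right, RunPod injects a few RUNPOD_* environment variables into each pod, including a data-center ID, so from inside the pod something like this should show it (the variable name and value format are from memory, so verify on your pods):
```bash
env | grep RUNPOD
# look for something like RUNPOD_DC_ID (e.g. RUNPOD_DC_ID=US-OR-1)
echo "$RUNPOD_DC_ID"
```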
But now the point is that we see an inference performance difference between different pods in the US.
Yes, that's the other problem, and I still don't know the root cause; I see some memory benchmark difference between the slow and fast pods in the US.
have you created a ticket for that?
And maybe you can specify which pod ID is slow?
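If it helps to grab the IDs, listing your pods from the CLI should give them (assuming your runpodctl version has the same `get pod` subcommand mine does):
```bash
runpodctl get pod   # lists pod IDs, names, GPU type and status
```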
Where do I create a ticket?
@lil_xiang
The thread has been escalated to Zendesk!
there
Cool, thanks! I'll create one.
I think it has already been created; check that button.