Serverless GPU Pricing
Hello. I chose a 24 GiB configuration with the following GPUs: L4, RTXA5000, and RTX3090. I ran some benchmarks and noticed that using only RTX3090 is better for my use-case (faster execution times and so on).
Is the base pricing for all these 3 GPUs the same? That is, supposing for a moment that the delay times and execution times are the same across all GPUs, will the billing result in the same value regardless of the one I choose?
if they're in the same class then yes
look at the endpoint settings, there's the price for them
they are grouped
Some GPUs with higher or lower VRAM have different pricing
OK. That's what I thought; just wanted to make sure.
Many thanks.
No problem, happy to help
For instance, one active worker per month would cost (16 GB GPU):
$0.00012 × 3600 × 24 × 30 = $311.04
Correct @nerdylive ?
i usually count it like:
base price (active), per second: $0.00012/s
× 60 s (per minute): $0.0072/min
× 60 min (per hour): $0.432/h
(i assume 730 h/mo) so × 730: $315.36/mo
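A quick sketch of the same arithmetic in Python, using the $0.00012/s active-worker rate quoted above; it just compares the 30-day and 730-hour month approximations from the two calculations:

```python
# Rough monthly cost estimate for one always-active serverless worker.
# Rate taken from the thread: $0.00012 per second of active time (16 GB GPU tier).
RATE_PER_SECOND = 0.00012

# Approximation 1: 30-day month (3600 s/h * 24 h/day * 30 days)
cost_30_days = RATE_PER_SECOND * 3600 * 24 * 30    # -> 311.04

# Approximation 2: 730-hour month (365 days * 24 h / 12 months = 730 h)
cost_730_hours = RATE_PER_SECOND * 3600 * 730      # -> 315.36

print(f"30-day month:   ${cost_30_days:.2f}")
print(f"730-hour month: ${cost_730_hours:.2f}")
```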
yeah around that
Thanks a lot @nerdylive super quick!
Do you think that if I ping the endpoint periodically (with no actual computation, just to keep it warm), like every N minutes, the pricing could be lower while still having a warm endpoint?
by "ping" I mean a payload that contains no job and makes the function return immediately
hmm sure try it
I'll try it and let you know!
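A minimal sketch of such a keep-warm ping, assuming a hypothetical endpoint URL, API key, and a no-op payload that your handler recognizes and returns from immediately; whether this actually ends up cheaper depends on how idle time is billed, which is what the experiment above is meant to check:

```python
import time
import requests  # assumes the requests package is installed

# Hypothetical values -- replace with your real endpoint URL and API key.
ENDPOINT_URL = "https://api.example.com/v2/<endpoint-id>/run"
API_KEY = "<your-api-key>"
PING_INTERVAL_MINUTES = 5

def ping_endpoint():
    # Send a payload the worker treats as "no job" so it exits right away.
    response = requests.post(
        ENDPOINT_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"input": {"ping": True}},  # hypothetical no-op payload
        timeout=30,
    )
    response.raise_for_status()

if __name__ == "__main__":
    while True:
        ping_endpoint()
        time.sleep(PING_INTERVAL_MINUTES * 60)
```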