RunPod · 5mo ago
Emad (OP)

LLAMA 3.1 8B Model Cold Start and Delay time very long

Hey, our cold start time always reaches over a minute, and the same goes for the delay time. For live use we need this to be quicker. We have tried with a network volume as well, but it doesn't change anything.
16 Replies
nerdylive · 5mo ago
Set active workers to 1.
Emad (OP) · 5mo ago
Is there no other solution? We need to control costs as well.
nerdylive · 5mo ago
Well, then you need to use that endpoint every minute or so.
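For reference, a minimal sketch of what a keep-warm ping could look like, assuming the standard serverless `/runsync` route; the endpoint ID, API key environment variable, and request payload below are placeholders and depend on your handler's input schema:

```python
import os
import time

import requests

ENDPOINT_ID = "your-endpoint-id"        # placeholder
API_KEY = os.environ["RUNPOD_API_KEY"]  # assumes the key is set in the environment


def ping_endpoint() -> None:
    """Send a tiny request so the worker doesn't sit idle long enough to spin down."""
    resp = requests.post(
        f"https://api.runpod.ai/v2/{ENDPOINT_ID}/runsync",
        headers={"Authorization": f"Bearer {API_KEY}"},
        # Payload shape depends on your handler; this is an illustrative minimal request.
        json={"input": {"prompt": "ping", "max_tokens": 1}},
        timeout=120,
    )
    resp.raise_for_status()


if __name__ == "__main__":
    while True:
        ping_endpoint()
        time.sleep(60)  # roughly "every minute or so"
```

Note that each ping still bills execution time, so an active worker (as suggested above) achieves the same effect without babysitting a script.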
Emad (OP) · 5mo ago
It's usually not used every minute. At night our user count is lower, so it isn't used as frequently. The reason RunPod was pushed by our team was that we saw it gave record cold start times.
nerdylive · 5mo ago
Yeah, those are the only solutions that I know of... I don't believe there's any free way of cutting costs beyond that. And your worker becoming idle after running doesn't always mean it requires a cold start again.
Emad (OP) · 5mo ago
But I thought for LLMs the cold start time was in seconds, according to the blog posts.
nerdylive · 5mo ago
Oh, I rarely read the blogs; which one, I wonder... And how do you load your models? Where from?
Emad (OP) · 5mo ago
I tried through a network volume and the normal way too; both give the same result.
Emad (OP) · 5mo ago
RunPod Blog
Run Larger LLMs on RunPod Serverless Than Ever Before - Llama-3 70B...
Up until now, RunPod has only supported using a single GPU in Serverless, with the exception of using two 48GB cards (which honestly didn't help, given the overhead involved in multi-GPU setups for LLMs.) You were effectively limited to what you could fit in 80GB, so you would essentially be…
nerdylive · 5mo ago
Oh okay, so maybe your loading time results from the download plus loading the model into VRAM. The next time it loads, it will be faster.
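A minimal sketch of the usual pattern for keeping that load cost out of the per-request path: load the model at module scope in the handler file, so it happens once per worker start rather than once per request. The model ID and input schema here are illustrative, not taken from this thread:

```python
import runpod
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Meta-Llama-3.1-8B-Instruct"  # illustrative

# Loaded once when the worker starts; a cold start still pays this cost,
# but subsequent requests on a warm worker skip it entirely.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,
    device_map="cuda",
)


def handler(job):
    """RunPod serverless handler: generate a completion for job['input']['prompt']."""
    prompt = job["input"]["prompt"]
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=256)
    return tokenizer.decode(output[0], skip_special_tokens=True)


runpod.serverless.start({"handler": handler})
```

Baking the weights into the image (rather than pulling them at runtime) also removes the download from the first-start path.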
Emad (OP) · 5mo ago
Yes, the next time it is faster, but for a request after a while it takes over a minute.
nerdylive · 5mo ago
The cold start times there mean FlashBoot on subsequent requests, not the first request after a while, I believe. That's why serverless won't charge you when it's not used... but if you want it to stay "warm", try active workers.
Emad (OP) · 5mo ago
I thought FlashBoot was for the first request as well.
nerdylive · 5mo ago
Oh ya, it's not. FlashBoot helps with subsequent requests by keeping the model warm without charging you; that's a way to understand it.
yhlong00000 · 5mo ago
When you create an endpoint, the worker first needs to download the images. Depending on the size of the model you’re running, this can take some time. If you send a request during this initial phase, it will remain in the queue and won’t be processed because the worker isn’t ready to serve yet. Once the worker is initialized, performance will depend on your request traffic pattern, idle timeout settings, and the minimum number of workers you’ve configured. If your requests are sporadic and there are no active workers, you will experience a cold start delay. However, if you have a steady stream of requests, you’ll benefit from faster response times.
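If you want to see which of those phases a slow request is stuck in, the serverless API exposes a health route per endpoint; the sketch below just prints whatever it returns rather than assuming specific response fields:

```python
import os

import requests

ENDPOINT_ID = "your-endpoint-id"        # placeholder
API_KEY = os.environ["RUNPOD_API_KEY"]  # assumes the key is set in the environment

# Reports queued/in-progress jobs and worker states for the endpoint,
# which shows whether requests are waiting on workers that aren't ready yet.
resp = requests.get(
    f"https://api.runpod.ai/v2/{ENDPOINT_ID}/health",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```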
nerdylive · 5mo ago
Mm, "sporadic", nice vocabulary.