RunPod · 2mo ago
Emad

LLAMA 3.1 8B Model Cold Start and Delay time very long

Hey, our cold start time always reaches over a minute, and the delay is just as long. For live serving we need this to be quicker. We have tried with a network volume as well, but it doesn't change anything.
nerdylive · 2mo ago
Set active workers to 1.
Emad · 2mo ago
Is there no other solution? We need to control costs as well.
nerdylive · 2mo ago
Well, then you need to hit that endpoint every minute or so to keep it warm.
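For example, a minimal keep-warm sketch (assuming the standard `https://api.runpod.ai/v2/{endpoint_id}/run` route and a handler that treats a tiny `"ping"` input as a cheap no-op) might look like this:

```python
# Keep-warm sketch: ping the serverless endpoint on a schedule so an idle
# worker is less likely to be scaled down between real requests.
import os
import time

import requests

ENDPOINT_ID = os.environ["RUNPOD_ENDPOINT_ID"]
API_KEY = os.environ["RUNPOD_API_KEY"]

URL = f"https://api.runpod.ai/v2/{ENDPOINT_ID}/run"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

while True:
    # A tiny prompt keeps the worker (and the loaded model) warm; your handler
    # decides how cheap this request actually is.
    resp = requests.post(
        URL, headers=HEADERS, json={"input": {"prompt": "ping"}}, timeout=30
    )
    resp.raise_for_status()
    time.sleep(60)  # roughly once a minute, as suggested above
```

Note that each ping still bills execution time, so it's worth comparing the cost against simply setting one active worker.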
Emad · 2mo ago
It's usually not used every minute. At night our user count is lower, so it isn't used as frequently. The reason RunPod was pushed by our team was because we saw it gave record cold start times.
nerdylive · 2mo ago
Yeah, those are the only solutions that I know of; there's no free way of cutting costs beyond that, I believe. Also, not every time your worker becomes idle after running does it require a cold start again.
Emad · 2mo ago
But I thought for LLMs the cold start time was in seconds, according to the blog posts.
nerdylive · 2mo ago
Oh, I rarely read the blogs. Which one, I wonder? And how do you load your models, and from where?
Emad · 2mo ago
I tried through a network volume and without one too; both give the same result.
Emad · 2mo ago
RunPod Blog
Run Larger LLMs on RunPod Serverless Than Ever Before - Llama-3 70B...
Up until now, RunPod has only supported using a single GPU in Serverless, with the exception of using two 48GB cards (which honestly didn't help, given the overhead involved in multi-GPU setups for LLMs.) You were effectively limited to what you could fit in 80GB, so you would essentially be
nerdylive · 2mo ago
Oh okay, so maybe your loading time results from downloading the model plus loading it into VRAM; the next time it loads, it will be faster.
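If the download is the dominant cost, one common workaround is to bake the weights into the container image at build time so a cold worker only has to load them into VRAM. A hedged sketch (the model ID and target path are assumptions for illustration):

```python
# download_weights.py — run during `docker build` so the weights live in an
# image layer instead of being fetched at cold start.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="meta-llama/Meta-Llama-3.1-8B-Instruct",  # assumed model; use your own
    local_dir="/models/llama-3.1-8b",                 # path your handler will read
)
# Note: the official Llama repos are gated, so the build would need an HF token.
```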
Emad · 2mo ago
Yes, the next time it is faster, but a request after a while of inactivity takes over a minute again.
nerdylive · 2mo ago
The cold start times there mean FlashBoot on subsequent requests, not the first request after a while, I believe. That's why serverless won't charge you when it's not used... but if you want it to stay "warm", try active workers.
Emad · 2mo ago
I thought FlashBoot was for the first request as well.
nerdylive · 2mo ago
Oh, ya, it's not. FlashBoot helps with subsequent requests by keeping the model warm without charging you for that idle time; that's one way to understand it.
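Either way, the warm-request savings only materialize if the model is loaded once per worker rather than once per request. A sketch of a handler that loads at import time (the vLLM usage and model path are assumptions; `runpod.serverless.start` is RunPod's documented entry point):

```python
import runpod
from vllm import LLM, SamplingParams  # assumption: a vLLM-based worker

# Loaded once when the worker process starts; warm/FlashBoot requests reuse it.
llm = LLM(model="/models/llama-3.1-8b")

def handler(job):
    prompt = job["input"].get("prompt", "ping")
    if prompt == "ping":
        return {"status": "warm"}  # cheap path for keep-warm pings
    out = llm.generate([prompt], SamplingParams(max_tokens=256))
    return {"text": out[0].outputs[0].text}

runpod.serverless.start({"handler": handler})
```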
yhlong00000 · 5w ago
When you create an endpoint, the worker first needs to download the image. Depending on the size of the model you're running, this can take some time. If you send a request during this initial phase, it will remain in the queue and won't be processed because the worker isn't ready to serve yet. Once the worker is initialized, performance will depend on your request traffic pattern, idle timeout setting, and the minimum number of workers you've configured. If your requests are sporadic and there are no active workers, you will experience a cold start delay. However, if you have a steady stream of requests, you'll benefit from faster response times.
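To confirm which phase a slow request is actually stuck in, you can poll the job status: `IN_QUEUE` means no worker is ready yet (initializing or cold starting), while `IN_PROGRESS` means the handler is running. A rough sketch against the `/status/{job_id}` route:

```python
import os
import time

import requests

ENDPOINT_ID = os.environ["RUNPOD_ENDPOINT_ID"]
API_KEY = os.environ["RUNPOD_API_KEY"]
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

def wait_for(job_id: str) -> dict:
    """Poll a job (the id comes from the /run response) until it finishes."""
    url = f"https://api.runpod.ai/v2/{ENDPOINT_ID}/status/{job_id}"
    while True:
        status = requests.get(url, headers=HEADERS, timeout=30).json()
        print(status["status"])  # e.g. IN_QUEUE -> IN_PROGRESS -> COMPLETED
        if status["status"] in ("COMPLETED", "FAILED"):
            return status
        time.sleep(2)
```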