RunPod
Created by Nickbkl on 12/11/2024 in #⚡|serverless
Running llama 3.3 70b using vLLM and 160gb network volume
49 replies

logs
no errors in the logs 😦
can't seem to get any responses
ok I'll try creating a new endpoint
I'm using 5 GPUs per worker; it keeps exiting with error code 1
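For context: vLLM shards attention heads across GPUs under tensor parallelism, and Llama 3.x 70B has 64 attention heads, so the tensor-parallel size must divide 64 evenly. 2, 4, or 8 GPUs per worker work; 5 does not, and the engine aborts at startup. A minimal sketch of the equivalent local configuration, assuming the RunPod vLLM worker maps this to a TENSOR_PARALLEL_SIZE environment variable:

from vllm import LLM

# Minimal sketch, not the exact worker code. tensor_parallel_size must
# evenly divide the model's 64 attention heads: 2, 4, 8 are valid, 5 is not
# (vLLM raises at engine start, and the worker exits with code 1).
llm = LLM(
    model="meta-llama/Llama-3.3-70B-Instruct",
    tensor_parallel_size=4,          # 64 % 4 == 0, so this loads
    download_dir="/runpod-volume",   # assumption: cache weights on the network volume
)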
gonna try AWQ but I am a noob, gonna do some research
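For reference, AWQ stores weights in 4 bits, cutting a 70B model from roughly 140 GB of FP16 weights to roughly 35 GB, which is what makes it viable on 48 GB cards. A hedged sketch of loading an AWQ checkpoint in vLLM; the repo id below is a placeholder, not a verified upload:

from vllm import LLM

# Sketch only: the model repo id is hypothetical.
# 70e9 params * 2.0 bytes (FP16)   ~= 140 GB of weights
# 70e9 params * 0.5 bytes (4-bit)  ~=  35 GB of weights, KV cache on top
llm = LLM(
    model="org/Llama-3.3-70B-Instruct-AWQ",  # hypothetical AWQ checkpoint
    quantization="awq",                      # vLLM's flag for AWQ models
)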
thank you!
ok!
ahh ok! Is 3 enough?
add workers?
something must be wrong with my setup
yes
{
  "endpointId": "ikmbyelhctz06j",
  "workerId": "2zeadzwvontveg",
  "level": "error",
  "message": "Uncaught exception | <class 'torch.OutOfMemoryError'>; CUDA out of memory. Tried to allocate 896.00 MiB. GPU 0 has a total capacity of 44.45 GiB of which 444.62 MiB is free. Process 1865701 has 44.01 GiB memory in use. Of the allocated memory 43.71 GiB is allocated by PyTorch, and 1.19 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables); <traceback object at 0x7f0a94eff580>;",
  "dt": "2024-12-11 05:47:39.26656704"
}
with no quantization
It's past 20 mins now
I set up with the vLLM template without quant for now, using A6000, A40 and 210gb of volume in Canada. I posted an initial request. How long will this take to initialize roughly?
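A rough sense of scale for that first request: an unquantized 70B checkpoint is about 140 GB of FP16 safetensors, and on the first cold start the worker has to download all of it to the network volume before vLLM can even begin loading, so waits of 15-25 minutes are plausible. Back-of-the-envelope numbers, with the link speeds as pure assumptions:

# Cold-start estimate; both the model size and link speeds are assumptions.
weight_bytes = 70e9 * 2                  # ~140 GB of FP16 weights
for gbps in (1, 5, 10):                  # hypothetical download throughput
    minutes = weight_bytes * 8 / (gbps * 1e9) / 60
    print(f"{gbps} Gbit/s: ~{minutes:.0f} min just to fetch weights")
# Later cold starts reload from the network volume instead of re-downloading.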
I think I'm going to use the suggested option (A6000, A40) and use AWQ quant
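A quick fit check on why that combination works, with all numbers as rounded assumptions: on 48 GB cards (A6000/A40), FP16 weights alone need at least three GPUs and realistically four once the KV cache is counted, while a 4-bit AWQ build fits on one to two.

import math

# All figures are rough assumptions, weights only (KV cache comes on top).
GPU_GB = 48                           # A6000 / A40 class
fp16_gb = 70e9 * 2.0 / 1e9            # ~140 GB unquantized
awq_gb = 70e9 * 0.5 / 1e9             # ~35 GB at 4-bit
print(f"FP16: {fp16_gb:.0f} GB -> >= {math.ceil(fp16_gb / GPU_GB)} x 48 GB GPUs")
print(f"AWQ:  {awq_gb:.0f} GB -> {math.ceil(awq_gb / GPU_GB)} x 48 GB GPU with cache headroom")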