nerdylive
RunPod
Created by nerdylive on 7/2/2024 in #⚡|serverless
Bug in runpodctl project?
No description
3 replies
RunPod
Created by nerdylive on 7/1/2024 in #⚡|serverless
VLLM WORKER ERROR
On fp8 quantization:
24 replies
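A note on the fp8 thread above: a minimal sketch of what enabling fp8 quantization looks like with vLLM's offline Python API, assuming a placeholder model id; RunPod's worker image wires the same option through its template configuration, so this only illustrates the parameter involved.

```python
# Minimal sketch: enabling fp8 quantization via vLLM's offline API.
# The model id is a placeholder; fp8 also needs a GPU whose compute
# capability supports it, otherwise vLLM errors out at engine startup.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # assumption: any HF model id
    quantization="fp8",           # on-the-fly fp8 weight quantization
    gpu_memory_utilization=0.90,  # fraction of VRAM the engine may claim
)

out = llm.generate(["Hello"], SamplingParams(max_tokens=16))
print(out[0].outputs[0].text)
```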
RunPod
Created by nerdylive on 5/23/2024 in #⛅|pods
This pod suddenly appeared in my account (I didn't create it)
vi9vaz7fu77b52 is the pod ID; I've already deleted it. I think it was because of the vLLM workers/template?
2 replies
RunPod
Created by nerdylive on 5/12/2024 in #⛅|pods
Projects (runpodctl): How to add registry auth like docker login
Well, how? (See the sketch after this entry.)
19 replies
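On the registry-auth question above: `docker login` ultimately just writes a base64-encoded `user:password` pair into `~/.docker/config.json`, so any tool accepting a registry credential needs the same pieces. A sketch of producing that auth entry by hand; the registry URL and credentials are placeholders, and this illustrates the docker-login format, not a runpodctl feature.

```python
# Sketch: what `docker login` stores in ~/.docker/config.json.
# Registry, username, and password below are placeholders.
import base64
import json

registry = "registry.example.com"  # assumption: your private registry
auth = base64.b64encode(b"myuser:mypassword").decode()

config = {"auths": {registry: {"auth": auth}}}
print(json.dumps(config, indent=2))
# {
#   "auths": {
#     "registry.example.com": {
#       "auth": "bXl1c2VyOm15cGFzc3dvcmQ="
#     }
#   }
# }
```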
RunPod
Created by nerdylive on 4/20/2024 in #⚡|serverless
Slow connection speed
No description
14 replies
RunPod
Created by nerdylive on 2/17/2024 in #⚡|serverless
I think my worker is bugged
No description
24 replies
RunPod
Created by nerdylive on 12/22/2023 in #⚡|serverless
vLLM problem: CUDA out of memory (I'm using 2 GPUs, RunPod's worker-vllm image)
No description
4 replies
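On the two-GPU out-of-memory thread above: with vLLM, a model is only split across GPUs when tensor parallelism is enabled; at the default of 1, the full model must fit on a single card even if two are attached. A minimal sketch of the relevant engine arguments (the model id is a placeholder; worker-vllm exposes equivalent settings through its template configuration rather than Python arguments):

```python
# Sketch: splitting a model across 2 GPUs with vLLM tensor parallelism.
# With tensor_parallel_size=1 (the default) the whole model must fit on
# one GPU, a common cause of CUDA out-of-memory on multi-GPU workers.
from vllm import LLM

llm = LLM(
    model="mistralai/Mistral-7B-Instruct-v0.2",  # assumption: any HF model id
    tensor_parallel_size=2,       # shard weights across both GPUs
    gpu_memory_utilization=0.90,  # leave headroom for activations/KV cache
)
```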
RunPod
Created by nerdylive on 12/22/2023 in #⚡|serverless
Hello, I think my template downloaded its Docker image while running my request
I've deleted the worker because it was still running, and I cancelled the request. Here's my endpoint ID: 489pa1sglkvuhf. By the time I realized it was downloading the Docker image instead of my model, it was too late.
16 replies
RunPod
Created by nerdylive on 12/19/2023 in #⚡|serverless
CUDA too old
2023-12-19T14:21:37.212836490Z The NVIDIA driver on your system is too old (found version 11070). Please update your GPU driver by downloading and installing a new version from the URL: http://www.nvidia.com/Download/index.aspx Alternatively, go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver.: str
2023-12-19T14:21:37.214841659Z Traceback (most recent call last):
2023-12-19T14:21:37.214853339Z   File "/stable-diffusion-webui/modules/errors.py", line 84, in run
2023-12-19T14:21:37.214858479Z     code()
2023-12-19T14:21:37.214861749Z   File "/stable-diffusion-webui/modules/devices.py", line 63, in enable_tf32
2023-12-19T14:21:37.214865109Z     if any(torch.cuda.get_device_capability(devid) == (7, 5) for devid in range(0, torch.cuda.device_count())):
2023-12-19T14:21:37.214868209Z   File "/stable-diffusion-webui/modules/devices.py", line 63, in <genexpr>
2023-12-19T14:21:37.214871259Z     if any(torch.cuda.get_device_capability(devid) == (7, 5) for devid in range(0, torch.cuda.device_count())):
2023-12-19T14:21:37.214874339Z   File "/opt/conda/lib/python3.10/site-packages/torch/cuda/__init__.py", line 435, in get_device_capability
2023-12-19T14:21:37.214877390Z     prop = get_device_properties(device)
2023-12-19T14:21:37.214880450Z   File "/opt/conda/lib/python3.10/site-packages/torch/cuda/__init__.py", line 449, in get_device_properties
2023-12-19T14:21:37.214883470Z     _lazy_init()  # will define _get_device_properties
2023-12-19T14:21:37.214886600Z   File "/opt/conda/lib/python3.10/site-packages/torch/cuda/__init__.py", line 298, in _lazy_init
2023-12-19T14:21:37.214891770Z     torch._C._cuda_init()...
14 replies
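For the driver-mismatch traceback above: the "found version 11070" in the message is the driver's supported CUDA version (11.7) being compared against what the PyTorch build expects. A small sketch of surfacing both numbers before CUDA is initialized, using only standard torch calls:

```python
# Sketch: report the CUDA version PyTorch was built against and whether
# the installed driver can actually initialize it. A mismatch like the
# one in the traceback (driver supports CUDA 11.7, torch built for a
# newer CUDA) shows up here as is_available() == False.
import torch

print("torch version:        ", torch.__version__)
print("built against CUDA:   ", torch.version.cuda)
print("driver can init CUDA: ", torch.cuda.is_available())

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        print(i, torch.cuda.get_device_name(i),
              torch.cuda.get_device_capability(i))
```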
RunPod
Created by nerdylive on 12/18/2023 in #⚡|serverless
Cost calculation for serverless
If this is my request, then how much am I billed? (Suppose I'm using the 24 GB GPU.) { "delayTime": 5981, "executionTime": 7138, ..... (see the worked example after this entry)
21 replies
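A worked example for the billing question above: serverless workers are billed per second of runtime, so the executionTime from the response payload is multiplied by the GPU's per-second rate (the times in the payload are milliseconds). The rate below is an assumed placeholder for a 24 GB GPU, not a quoted RunPod price:

```python
# Worked sketch: estimating serverless cost from a response payload.
# PRICE_PER_SECOND is an assumed placeholder rate for a 24 GB GPU,
# not an actual RunPod price; check the pricing page for real numbers.
PRICE_PER_SECOND = 0.00044  # USD/s -- assumption

response = {"delayTime": 5981, "executionTime": 7138}  # milliseconds

execution_s = response["executionTime"] / 1000  # 7.138 s
cost = execution_s * PRICE_PER_SECOND

print(f"execution: {execution_s:.3f} s -> ~${cost:.6f}")
# execution: 7.138 s -> ~$0.003141
```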
RunPod
Created by nerdylive on 12/17/2023 in #⚡|serverless
Will I be able to use more than 1 GPU per worker in serverless?
Just wondering.
7 replies