nerdylive
RunPod
Created by nerdylive on 8/20/2024 in #⚡|serverless
Request timed out?
2024-08-20T12:37:52.613225186Z {"requestId": "462ba6df-29cc-4ed9-993a-4f6c60f99f0a-e1", "message": "Failed to return job results. | Connection timeout to host https://api.runpod.ai/v2/dvcoogy2i4pf6q/job-done/84aluci5iesmjp/462ba6df-29cc-4ed9-993a-4f6c60f99f0a-e1?gpu=NVIDIA+GeForce+RTX+4090&isStream=false", "level": "ERROR"}
2024-08-20T12:37:52.613286676Z {"requestId": "462ba6df-29cc-4ed9-993a-4f6c60f99f0a-e1", "message": "Finished.", "level": "INFO"}
This happens occasionally. Why? It shouldn't happen, right? I've created a ticket regarding this issue. Endpoint ID:
dvcoogy2i4pf6q
11 replies
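The "Failed to return job results" error above is a transient connection timeout while the worker posts results back to the API. A generic retry-with-exponential-backoff sketch of how such a transient failure can be absorbed — this is not the RunPod SDK, and `flaky_send` is a made-up stand-in for the real HTTP call:

```python
import time

def post_with_retries(send, max_attempts=3, base_delay=1.0):
    """Retry a flaky zero-argument callable with exponential backoff.

    `send` raises on failure (e.g. a connection timeout) and returns
    a response on success. After the last attempt, the error propagates.
    """
    for attempt in range(max_attempts):
        try:
            return send()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...

# Simulate a send that times out twice, then succeeds.
calls = {"n": 0}
def flaky_send():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("Connection timeout to host")
    return "job-done acknowledged"
```

With this shape, an occasional timeout on the `job-done` callback would only surface as an error if every attempt fails.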
RunPod
Created by nerdylive on 8/16/2024 in #⚡|serverless
Something went wrong when creating serverless vLLM
No description
12 replies
RunPod
Created by nerdylive on 8/14/2024 in #⚡|serverless
Why does it seem like my job isn't assigned to a worker (even after refreshing)?
No description
43 replies
RunPod
Created by nerdylive on 8/6/2024 in #⚡|serverless
Workflow works on pods but not comfyui on serverless
My workflow works in ComfyUI on Pods, but not when run on serverless. It seems to get stuck loading the model, and execution times out after running for 10+ minutes. Both serverless and Pods use the same GPU with the same VRAM.
6 replies
RunPod
Created by nerdylive on 7/2/2024 in #⚡|serverless
Bug in runpodctl project?
No description
3 replies
RunPod
Created by nerdylive on 7/1/2024 in #⚡|serverless
VLLM WORKER ERROR
On fp8 quantization:
24 replies
RunPod
Created by nerdylive on 5/23/2024 in #⛅|pods
This pod suddenly appeared in my account (I didn't create it)
vi9vaz7fu77b52 is the pod ID; I've already deleted it. I think it's because of the vLLM workers / template?
2 replies
RunPod
Created by nerdylive on 5/12/2024 in #⛅|pods
Projects (runpodctl): How to add registry auth like docker login
Well, how?
19 replies
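For context on what "registry auth like docker login" means mechanically: `docker login` just writes a base64-encoded `auths` entry into `~/.docker/config.json`, and tools that accept Docker-style credentials expect that same format. A minimal sketch of the entry it produces — the registry name and credentials here are made up, and this does not cover whatever flag or config runpodctl itself uses:

```python
import base64
import json

def docker_auth_entry(registry, username, password):
    """Build the `auths` entry that `docker login` writes to
    ~/.docker/config.json: the `auth` field is base64("username:password")."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"auths": {registry: {"auth": token}}}

config = docker_auth_entry("registry.example.com", "alice", "s3cret")
print(json.dumps(config, indent=2))
```

Supplying the same `username:password` pair (or a pre-built config.json) is what any "add registry auth" feature ultimately boils down to.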
RunPod
Created by nerdylive on 4/20/2024 in #⚡|serverless
Slow connection speed
No description
14 replies
RunPod
Created by nerdylive on 2/17/2024 in #⚡|serverless
I think my worker is bugged
No description
24 replies
RunPod
Created by nerdylive on 12/22/2023 in #⚡|serverless
vLLM problem: CUDA out of memory (I'm using 2 GPUs with RunPod's worker-vllm image)
No description
4 replies
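A common first check for multi-GPU CUDA OOM is whether the model weights alone even fit. A rough back-of-envelope sketch, under stated assumptions: fp16 weights at 2 bytes/parameter, weights sharded evenly across GPUs by tensor parallelism, and no accounting for KV cache, activations, or CUDA context overhead, all of which need headroom on top of this figure:

```python
def weights_gib_per_gpu(n_params_billion, bytes_per_param=2, n_gpus=2):
    """Approximate per-GPU memory for model weights alone, in GiB.

    Assumes even tensor-parallel sharding across `n_gpus` and ignores
    KV cache, activations, and CUDA context, which all need extra room.
    """
    total_bytes = n_params_billion * 1e9 * bytes_per_param
    return total_bytes / n_gpus / 2**30

# A 13B model in fp16 split across two GPUs: ~12.1 GiB of weights each.
per_gpu = weights_gib_per_gpu(13)
```

If the weights already approach the card's VRAM, vLLM's preallocated KV cache (controlled by its `gpu_memory_utilization` engine argument) can push it over the edge.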
RunPod
Created by nerdylive on 12/22/2023 in #⚡|serverless
Hello, I think my template downloaded the Docker template image while running my request
I've deleted the worker because it was still running, and I cancelled the request. Here's my endpoint ID: 489pa1sglkvuhf. By the time I realized it was downloading the Docker image instead of my model, it was too late.
16 replies
RunPod
Created by nerdylive on 12/19/2023 in #⚡|serverless
CUDA too old
2023-12-19T14:21:37.212836490Z The NVIDIA driver on your system is too old (found version 11070). Please update your GPU driver by downloading and installing a new version from the URL: http://www.nvidia.com/Download/index.aspx Alternatively, go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver.: str
2023-12-19T14:21:37.214841659Z Traceback (most recent call last):
2023-12-19T14:21:37.214853339Z   File "/stable-diffusion-webui/modules/errors.py", line 84, in run
2023-12-19T14:21:37.214858479Z     code()
2023-12-19T14:21:37.214861749Z   File "/stable-diffusion-webui/modules/devices.py", line 63, in enable_tf32
2023-12-19T14:21:37.214865109Z     if any(torch.cuda.get_device_capability(devid) == (7, 5) for devid in range(0, torch.cuda.device_count())):
2023-12-19T14:21:37.214868209Z   File "/stable-diffusion-webui/modules/devices.py", line 63, in <genexpr>
2023-12-19T14:21:37.214871259Z     if any(torch.cuda.get_device_capability(devid) == (7, 5) for devid in range(0, torch.cuda.device_count())):
2023-12-19T14:21:37.214874339Z   File "/opt/conda/lib/python3.10/site-packages/torch/cuda/__init__.py", line 435, in get_device_capability
2023-12-19T14:21:37.214877390Z     prop = get_device_properties(device)
2023-12-19T14:21:37.214880450Z   File "/opt/conda/lib/python3.10/site-packages/torch/cuda/__init__.py", line 449, in get_device_properties
2023-12-19T14:21:37.214883470Z     _lazy_init()  # will define _get_device_properties
2023-12-19T14:21:37.214886600Z   File "/opt/conda/lib/python3.10/site-packages/torch/cuda/__init__.py", line 298, in _lazy_init
2023-12-19T14:21:37.214891770Z     torch._C._cuda_init()...
14 replies
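The "(found version 11070)" in the log above looks like CUDA's integer version encoding (major × 1000 + minor × 10, the `CUDA_VERSION` macro convention), which would mean the host driver supports at most CUDA 11.7 — older than what the image's PyTorch build expects. A small decoder, assuming the log follows that convention:

```python
def decode_cuda_version(v: int) -> str:
    """Decode CUDA's integer version encoding: major*1000 + minor*10.

    Example: 11070 -> "11.7". Assumes the value in the error log uses
    the CUDA_VERSION macro convention.
    """
    major, minor = v // 1000, (v % 1000) // 10
    return f"{major}.{minor}"
```

So `decode_cuda_version(11070)` gives "11.7", which tells you which driver the worker's host actually has installed.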
RunPod
Created by nerdylive on 12/18/2023 in #⚡|serverless
Cost calculation for serverless
If this is my request, how much am I billed? (Suppose I'm using the 24 GB GPU.) { "delayTime": 5981, "executionTime": 7138, .....
21 replies
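Serverless billing of this kind is typically the execution time multiplied by a per-second GPU rate; whether the cold-start `delayTime` is billed depends on the endpoint's configuration. A sketch using the `executionTime` from the question — the $0.00044/s rate is purely illustrative, not RunPod's actual price:

```python
def serverless_cost(execution_ms, per_second_rate):
    """Estimate billed cost as execution time (ms) times a per-second
    GPU rate. The rate is an assumption for illustration; whether
    delay/cold-start time is also billed depends on the endpoint."""
    return execution_ms / 1000 * per_second_rate

# executionTime of 7138 ms at a hypothetical $0.00044/s for a 24 GB GPU:
cost = serverless_cost(7138, 0.00044)  # ~ $0.0031
```

Under that assumption, the request above would cost a fraction of a cent; the `delayTime` of 5981 ms would roughly double that if cold-start time is billed too.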
RunPod
Created by nerdylive on 12/17/2023 in #⚡|serverless
Will I be able to use more than 1 GPU per worker in serverless?
Just wondering.
7 replies