Created by Björn on 7/11/2024 in #⚡|serverless
Higher-End GPU Workers Stop Prematurely
Hi. I am trying to run a serverless endpoint with the Omost model, which requires a lot of VRAM. When I accidentally started it on a 20 GB GPU, everything worked fine apart from the expected CUDA OOM. After configuring it to use 80 GB VRAM in EU-RO-1, the endpoint is created, but the workers constantly stop prematurely. Is there any way to figure out what is happening and why? The logs do not really seem to help me here.
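Not from the original post, but one diagnostic step for a situation like this is to log which GPU the worker actually received before the model loads, to rule out a scheduling mismatch. A minimal sketch (the `gpu_report` helper is hypothetical; the `nvidia-smi` query flags are standard):

```python
# Hypothetical diagnostic to run at the top of a serverless handler:
# report the GPU name and memory the worker actually got, before loading
# the model. Falls back gracefully when no NVIDIA driver is present.
import shutil
import subprocess

def gpu_report() -> str:
    """Return GPU name and total/free memory via nvidia-smi, or a fallback."""
    if shutil.which("nvidia-smi") is None:
        return "nvidia-smi not available (no NVIDIA driver in this environment)"
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=name,memory.total,memory.free",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=False,
    )
    return out.stdout.strip() or out.stderr.strip()

# Log this once per cold start so it appears in the worker logs.
print(gpu_report())
```

If the log shows a smaller card than configured, the issue is scheduling rather than the model itself; if the card is correct, the premature stops point elsewhere (e.g. startup timeouts or container exit codes).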
17 replies