RunPod


We're a community of enthusiasts, engineers, and enterprises, all sharing insights on AI, Machine Learning and GPUs!


Use SDK to create Network Storage Volumes for Serverless Endpoints

Hello 👋 I am using the SDK to create a serverless endpoint. I know I can specify a volume ID when creating the endpoint via SDK, but is there a way to also programmatically create the network storage volume and push data to it (and then attach it to the endpoint)?
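
A minimal sketch of one way to do this. The createNetworkVolume mutation name and its input fields are assumptions (introspect the GraphQL schema or check the API docs to confirm them); create_endpoint is in the Python SDK, though its exact signature can vary by SDK version:

```
import requests
import runpod

runpod.api_key = "YOUR_API_KEY"

# Assumption: network volumes are created via the GraphQL API; the
# mutation name and input fields below are a best guess.
mutation = """
mutation {
  createNetworkVolume(input: { name: "my-volume", size: 50, dataCenterId: "EU-RO-1" }) {
    id
  }
}
"""
resp = requests.post(
    "https://api.runpod.io/graphql",
    params={"api_key": runpod.api_key},
    json={"query": mutation},
)
volume_id = resp.json()["data"]["createNetworkVolume"]["id"]

# Attach the volume when creating the endpoint; check your SDK
# version for the exact create_endpoint signature.
endpoint = runpod.create_endpoint(
    name="my-endpoint",
    template_id="your-template-id",  # placeholder
    gpu_ids="AMPERE_24",
    network_volume_id=volume_id,
)
```

Pushing data onto the volume still needs a mount point: attach it to a temporary pod and copy files in, or use the S3-compatible volume API where it's available.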

Historical jobs

Is there any way I can call the /status API to check the status of old jobs, say 2-3 days old? Right now it only returns results if the job completed within the last 30 minutes.
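
There's no documented way to query jobs after the retention window, so one workaround is to have RunPod push each finished job to your own server and store it there. A sketch using the documented webhook field of the /run request (endpoint ID and URL are placeholders):

```
import requests

ENDPOINT_ID = "your-endpoint-id"  # placeholder
API_KEY = "YOUR_API_KEY"

resp = requests.post(
    f"https://api.runpod.ai/v2/{ENDPOINT_ID}/run",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "input": {"prompt": "..."},
        # RunPod POSTs the finished job (output included) to this URL,
        # so you can persist results beyond the retention window.
        "webhook": "https://your-server.example.com/runpod-results",
    },
)
print(resp.json()["id"])
```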

How to retrieve account spends using GraphQL

Hey there! From the documentation it's really unclear how to retrieve e.g. my daily account spend using GraphQL. The documentation really lacks info on how to structure queries, which is especially confusing for someone not familiar with GraphQL (like me lol). Can you please help? Thanks!...
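
A minimal sketch of a raw GraphQL call against the API. The myself query is part of RunPod's schema, but the exact billing/daily-spend field names vary by schema version, so treat the fields below as assumptions and introspect the schema to find the right ones:

```
import requests

API_KEY = "YOUR_API_KEY"

# Field names are assumptions -- run an introspection query or check
# the GraphQL docs for the exact spend/billing fields.
query = """
query {
  myself {
    id
    clientBalance
  }
}
"""
resp = requests.post(
    "https://api.runpod.io/graphql",
    params={"api_key": API_KEY},
    json={"query": query},
)
print(resp.json())
```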

RunPod Serverless really unreliable, delay time is way too high sometimes

I'm using a 24 GB vRAM serverless endpoint, and it is way too unstable. 90% of the time the "QUEUE" takes a couple of seconds and inference of the OmniParser v2 model takes 3-8 seconds, a result that is reproducible in Google Colab and on other GPUs. Nonetheless, every once in a while RunPod takes more than 40 seconds or even minutes to process a request. This happens when a specific worker bugs out and multiple requests are routed through it: the worker hangs for no reason and takes multiple minutes to do a job it should finish in seconds. It only happens for some workers, and only when the same worker is reused, which makes no sense, and RunPod charges you multiple minutes of DELAY TIME. Sometimes the request does not even complete: it sits at "IN_PROGRESS" (as seen in the image) for multiple minutes without finishing while RunPod charges you for every second. In any other environment, and even normally on RunPod, this process takes seconds and the "IN_PROGRESS" print shows only 3-8 times. This makes the endpoint highly unstable and way too expensive for a model that does not even use half of the vRAM....
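
This doesn't fix the underlying flaky worker, but a per-request execution timeout at least caps what a hung job can bill. A sketch using the policy object of the /run request (endpoint ID is a placeholder; confirm the field name against the current docs):

```
import requests

ENDPOINT_ID = "your-endpoint-id"  # placeholder
API_KEY = "YOUR_API_KEY"

resp = requests.post(
    f"https://api.runpod.ai/v2/{ENDPOINT_ID}/run",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "input": {"image_url": "https://example.com/screenshot.png"},
        # Fail the job if execution exceeds 30 s instead of letting a
        # hung worker bill for minutes (value is in milliseconds).
        "policy": {"executionTimeout": 30000},
    },
)
print(resp.json())
```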

Worker other than Python

As I understand it, only a Python library is implemented for serverless workers? If I don't use Python in my Docker image (a C# console app using the CUDA library), can I just run an HTTP server in my worker application, listen on the port set in the "RUNPOD_REALTIME_PORT" environment variable (as far as I understand from the RunPod GitHub), and register some HTTP routes for receiving job inputs, cancelling, etc.? If yes, where can I find the list of routes I need to implement on my HTTP server for it to act as a worker?...
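
The documented path is the Python SDK, and the HTTP worker contract isn't published as a stable interface, so one common workaround is a thin Python shim that delegates each job to your compiled binary. A sketch; /app/worker is a hypothetical path to the published C# console app inside the image:

```
import json
import subprocess

import runpod

def handler(job):
    # Delegate the job to the compiled binary: JSON in on stdin,
    # JSON out on stdout.
    proc = subprocess.run(
        ["/app/worker"],
        input=json.dumps(job["input"]),
        capture_output=True,
        text=True,
        check=True,
    )
    return json.loads(proc.stdout)

runpod.serverless.start({"handler": handler})
```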

How to deploy a custom model on RunPod?

I am planning to run word2vec and other classification models on RunPod, and I'm mostly using TensorFlow. Any idea how to deploy them on RunPod serverless?
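
A minimal handler sketch for a custom model. The model path is hypothetical (e.g. baked into the image or placed on an attached network volume), but the runpod.serverless.start pattern is the standard one:

```
import numpy as np
import runpod
import tensorflow as tf

# Load once at module import so warm workers reuse the model instead
# of reloading it on every request. The path is a placeholder.
model = tf.keras.models.load_model("/runpod-volume/classifier.keras")

def handler(job):
    # job["input"] is whatever JSON you send under "input" in /run.
    features = job["input"]["features"]
    preds = model.predict(np.array([features]))
    return {"prediction": preds[0].tolist()}

runpod.serverless.start({"handler": handler})
```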

Build fail:"code":"BLOB_UNKNOWN"

The error shown:
```
2025-03-07 00:09:40 [INFO] sha256:9b6fcd7cc5df8c4b6b83df5513c224b403b862ae741e7cd666dc045d995b49d1: 7282047 upload bytes left.
2025-03-07 00:09:42 [INFO] Pushed sha256:9b6fcd7cc5df8c4b6b83df5513c224b403b862ae741e7cd666dc045d995b49d1
2025-03-07 00:09:44 [ERROR] 476 | method: "PUT",
2025-03-07 00:09:44 [ERROR] 477 | });
...
```

400 Errors with allenai-olmocr on Serverless SGLang - Need Payload Help!

I'm trying to deploy the allenai/olmOCR-7B-0225-preview model (a fine-tuned Qwen/Qwen2-VL-7B model) on RunPod using the Serverless SGLang endpoint template, but I'm consistently getting 400 Bad Request errors when sending requests (running on an L40S). I'm sending PDF documents for OCR, and I suspect the issue is with the input payload. I've tried various common input formats based on the RunPod documentation and examples, but no luck so far: sending a PDF file and page number, as well as what I originally tried (PDF anchor text and an image). In the code below I am using the retrieved https://molmo.allenai.org/paper.pdf. I'm using the allenai-olmocr model (Hugging Face link: https://huggingface.co/allenai/olmOCR-7B-0225-preview), deployed as a Serverless SGLang endpoint on RunPod. I deployed it the lazy way, providing the Hugging Face handle and mostly default settings, and am wondering if I need to set up a handler and deploy via Docker to get it to work?...
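
A hedged sketch of an OpenAI-style vision payload. olmOCR is a vision-language model, so the PDF page generally has to be rasterized to an image first (e.g. with pdf2image); the field names under "input" depend on the worker image and are assumptions here, so treat this as a starting point rather than the worker's documented contract:

```
import base64
import requests

ENDPOINT_ID = "your-endpoint-id"  # placeholder
API_KEY = "YOUR_API_KEY"

# Rasterize the PDF page to an image before sending.
with open("page1.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

payload = {
    "input": {
        # Assumption: the worker forwards "messages" to the
        # chat-completions route; check the worker image's README.
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "OCR this page to plain text."},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/png;base64,{image_b64}"},
                    },
                ],
            }
        ]
    }
}
resp = requests.post(
    f"https://api.runpod.ai/v2/{ENDPOINT_ID}/runsync",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
)
print(resp.json())
```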

No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda'

Hey everyone 👋 I'm trying to use RunPod serverless to run the https://github.com/nerfstudio-project/gsplat/ Gaussian-splatting implementation. However, when building the project from source (the pip install . step in my Dockerfile below), I get the error: No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda'...
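
The usual cause is that no GPU is visible during docker build, so the CUDA extension build can't probe a device. A Dockerfile fragment sketching the common workaround of pinning TORCH_CUDA_ARCH_LIST; the base image and arch list are assumptions to match to your setup, and Python/pip/torch are assumed to be installed earlier in the file:

```
# Fragment only -- assumes Python, pip, and torch are installed
# earlier in the Dockerfile. The devel image ships nvcc.
FROM nvidia/cuda:12.1.1-devel-ubuntu22.04

ENV CUDA_HOME=/usr/local/cuda
# No GPU is visible during `docker build`, so pin the target
# architectures instead of letting the extension probe a device.
# 8.6 = Ampere (RTX 3090/A4000), 8.9 = Ada (L40S/4090); adjust to
# match the GPUs your endpoint uses.
ENV TORCH_CUDA_ARCH_LIST="8.6;8.9"

COPY . /gsplat
WORKDIR /gsplat
RUN pip install . --no-build-isolation
```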

Can't get Warm/Cold status

I have tried the "health" endpoint to retrieve the cold/warm status of an endpoint, but even with a ready worker the endpoint wasn't necessarily warm, and it cold-started anyway. I need an indicator of whether the endpoint will cold start or is still warm. Is that information currently available to retrieve somewhere and I'm missing it? If not, could you suggest a workaround if possible?...
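
There's no explicit warm/cold flag in the API; the closest signal is the worker counts from /health, which is exactly the limitation described above. A sketch (response field names per the docs, worth re-checking):

```
import requests

ENDPOINT_ID = "your-endpoint-id"  # placeholder
API_KEY = "YOUR_API_KEY"

resp = requests.get(
    f"https://api.runpod.ai/v2/{ENDPOINT_ID}/health",
    headers={"Authorization": f"Bearer {API_KEY}"},
)
health = resp.json()
# "idle" workers have a container up, but that doesn't guarantee the
# model is resident in GPU memory -- treat this as a heuristic, not a
# definitive warm/cold flag.
workers = health.get("workers", {})
print(health, "likely warm:", workers.get("idle", 0) > 0)
```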

Serverless Deployment runpod request Issue

I'm working on deploying the qwen_2.5_instruct model through RunPod using the vLLM direct deployment method. The qwen_2.5_instruct model is designed to accept more than one image at a time along with the prompt. However, with the vLLM method, RunPod only allows one image per request. I need to pass multiple images in the following format: messages = [...
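
For reference, the multi-image request shape in the OpenAI-style format vLLM accepts looks like the sketch below; it only succeeds once the engine is configured to allow more than one image per prompt via limit_mm_per_prompt (see the thread on that engine argument further down):

```
# Multi-image chat request in the OpenAI-style format vLLM accepts.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Compare these two images."},
            {"type": "image_url", "image_url": {"url": "https://example.com/a.png"}},
            {"type": "image_url", "image_url": {"url": "https://example.com/b.png"}},
        ],
    }
]
```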

How can I check the logs to see if my request uses the LoRA model

I deployed the qwen2-7B model using serverless and want to load the adapter checkpoint. My environment variable configuration is shown in the figure below, where LORA_MODULES={"name": "cn_writer", "path": "sinmu/cn-writer-qwen-7B-25w", "base_model_name": "Qwen/Qwen2-7B"}...
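
One hedged way to verify: with vLLM's OpenAI-compatible serving, a LoRA adapter is selected by passing its registered name as the model, and the worker logs should then show requests resolving to that adapter rather than the base model. The payload shape for the RunPod vLLM worker is an assumption here:

```
# Assumption: the RunPod vLLM worker forwards "model" through to
# vLLM's OpenAI-compatible layer, where a LoRA adapter is picked by
# its registered name. If the worker logs show the request served
# with "cn_writer" (rather than Qwen/Qwen2-7B), the adapter loaded.
payload = {
    "input": {
        "messages": [{"role": "user", "content": "Write a short story."}],
        "model": "cn_writer",
    }
}
```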

Troubles with answers

I use the Mistral 7B Instruct model with the standard settings in serverless, and I'm getting strange answers. I tried different temperature values, but it didn't help. Please tell me how to fix it...
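
A common cause of strange output from Mistral-7B-Instruct is skipping its chat template: the model expects prompts wrapped in [INST] tags, and a bare prompt often produces rambling answers regardless of temperature. A sketch, assuming the vLLM worker's prompt/sampling_params input format:

```
# Mistral-7B-Instruct expects its [INST] chat template.
payload = {
    "input": {
        "prompt": "[INST] Explain what a serverless endpoint is. [/INST]",
        "sampling_params": {"temperature": 0.7, "max_tokens": 256},
    }
}
```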

Adding parameters to Docker when running Serverless

Hi. I need to add limit_mm_per_prompt to my serverless endpoint. How can I do it?
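
If you run vLLM yourself inside a custom worker, this is an engine argument; the prebuilt vLLM worker generally exposes engine args as environment variables, but the exact variable name (e.g. LIMIT_MM_PER_PROMPT) is version-dependent, so check the worker's README. A sketch of the engine-level setting:

```
from vllm import LLM

# Allow up to 4 images per prompt. The model name is a placeholder.
llm = LLM(
    model="Qwen/Qwen2.5-VL-7B-Instruct",
    limit_mm_per_prompt={"image": 4},
)
```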

Serverless git integration rollback

Hi team! I've recently switched over to RunPod serverless for production with the git integration. I have a concern about reliability in a failure scenario. Firstly, I notice that a release takes roughly 1.5 hours to build and then for a worker to download and extract the image. Consider a bad release that starts getting rolled out: I can't see a button to stop it. Now, say it rolled out because I missed it for whatever reason; I can't see a button to roll back to the previous good version....

Async workers not running

When using the /run endpoint I will receive the usual response:
```
{
  "id": "d0e6d88c-8274-4554-bb6a-0a469361ae20-e1",
  "status": "IN_QUEUE"
}
```
...
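
For reference, a polling sketch against the documented /status route using the returned id; a job that never leaves IN_QUEUE usually means no worker picked it up (max workers exhausted, no matching GPUs available, or the handler crashing before runpod.serverless.start registers):

```
import time

import requests

ENDPOINT_ID = "your-endpoint-id"  # placeholder
API_KEY = "YOUR_API_KEY"
job_id = "d0e6d88c-8274-4554-bb6a-0a469361ae20-e1"  # id from /run

while True:
    status = requests.get(
        f"https://api.runpod.ai/v2/{ENDPOINT_ID}/status/{job_id}",
        headers={"Authorization": f"Bearer {API_KEY}"},
    ).json()
    if status["status"] not in ("IN_QUEUE", "IN_PROGRESS"):
        break
    time.sleep(2)
print(status)
```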

Docker login to a specific registry

Hi, I'd like to log in to nvcr.io using my API token. However, RunPod only allows me to set a username and password, and doesn't let me specify a registry. How can I set this? I'm following the instructions here: https://build.nvidia.com/nvidia/audio2face-2d/docker...

suggestion to create templates for repositories

Make it so that we can use GitHub repos instead of only Docker images for templates.