Request queued forever

Hi, I am facing a problem while interacting with my RunPod serverless endpoint. When I send the first request it gets queued and the worker never starts. Ideally a cold start should take 5 minutes at most, but it isn't initializing even after 15-20 minutes. I have already deleted the endpoint and created it again; that fixed the issue once, but now I'm hitting the same problem. I am using a Docker image with a custom tag. The logs say "worker is ready", "starting container", "remove container"; no error is thrown, but it gets stuck there. My max workers is 1, idle timeout is 300 seconds, and FlashBoot is enabled. Please note that I have deployed an image-to-image model. I don't understand what's causing the issue.
5 Replies
nerdylive · 5w ago
Hi, is this your first worker? Maybe check whether you have called `runpod.serverless.start({"handler": handler})` in your worker code. Or if you can share your worker code, that would help too.
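For reference, the bare minimum a worker needs looks something like this (a minimal sketch using the RunPod Python SDK; the handler body here is illustrative):

```python
# minimal_worker.py: a minimal sketch of a RunPod serverless worker.
import runpod


def handler(job):
    # job["input"] carries the JSON payload sent to the endpoint.
    prompt = job.get("input", {}).get("prompt", "hello")
    return {"echo": prompt}


# Without this call the worker never registers a handler with the queue,
# so incoming requests just sit in IN_QUEUE.
runpod.serverless.start({"handler": handler})
```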
bilaal.qaasim (OP) · 5w ago
Yes, this is my first worker. It worked a couple of times, but I have also hit this forever-queued state multiple times. I can't share the exact worker code, but below is the structure of my code:

```python
import base64
import os
from io import BytesIO

import requests
import runpod
from PIL import Image


def setup_environment():
    token = os.getenv("DUMMY_TOKEN")
    # Perform any authentication or initialization
    pass


def load_models():
    # Load and initialize required models
    pass


def validate_inputs(inputs):
    required_fields = ["user_text", "image_data"]
    for field in required_fields:
        if field not in inputs:
            raise ValueError(f"Missing required input: {field}")


def process_image_data(image_data):
    if image_data.startswith("http"):
        return Image.open(requests.get(image_data, stream=True).raw).convert("RGB")
    else:
        return Image.open(BytesIO(base64.b64decode(image_data))).convert("RGB")


def generate_images(prompt, input_image):
    outputs = []
    for i in range(4):  # Example loop for multiple generations
        # Placeholder logic for generating an image
        dummy_image = Image.new("RGB", (256, 256), color="gray")
        # Save and encode as Base64
        path = f"/workspace/dummyoutput{i + 1}.jpg"
        dummy_image.save(path)
        outputs.append(file_to_base64(path))
    return outputs


def file_to_base64(file_path):
    with open(file_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode("utf-8")


def handler(job):
    inputs = job.get("input", {})
    try:
        validate_inputs(inputs)  # Validate input data
    except ValueError as e:
        return {"error": str(e)}

    # Extract and process inputs
    user_text = inputs.get("user_text", "default text")
    image_data = inputs["image_data"]
    input_image = process_image_data(image_data)

    # Generate images (placeholder logic)
    outputs = generate_images(user_text, input_image)
    return {"outputs": outputs}


setup_environment()
load_models()
runpod.serverless.start({"handler": handler})
```

I have already wasted around 50 bucks on retries. Is this an issue on RunPod's end? @flash-singh @Merrell
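One way to separate handler bugs from endpoint/cold-start problems is to smoke-test the handler locally before spending more credits. A rough sketch, assuming the `runpod.serverless.start(...)` call is guarded by `if __name__ == "__main__":` (or the handler lives in an importable module, here hypothetically named `worker`); the sample payload is made up:

```python
# local_check.py: rough local smoke test for the handler, no endpoint involved.
# "worker" is a hypothetical module name for the file shown above.
from worker import handler

fake_job = {
    "input": {
        "user_text": "a red car",
        "image_data": "https://example.com/sample.jpg",  # placeholder URL
    }
}

# If this returns outputs (or a clear validation error), the hang is more
# likely on the endpoint/worker side than in the handler logic itself.
print(handler(fake_job))
```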
nerdylive · 5w ago
Whoa, that is a lot. Can you share the ID of the endpoint that's erroring?
Poddy · 5w ago
@bilaal.qaasim
Escalated To Zendesk
The thread has been escalated to Zendesk!
nerdylive · 5w ago
I think the RunPod team can take a closer look at your endpoint to see what's wrong.
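In the meantime, the endpoint's health route can show whether a worker ever picks the job up at all. A sketch, assuming the standard serverless API base URL; the endpoint ID and the `RUNPOD_API_KEY` environment variable are placeholders:

```python
# health_check.py: query a serverless endpoint's health route.
import os

import requests

ENDPOINT_ID = "your-endpoint-id"  # placeholder
resp = requests.get(
    f"https://api.runpod.ai/v2/{ENDPOINT_ID}/health",
    headers={"Authorization": f"Bearer {os.environ['RUNPOD_API_KEY']}"},
)
# Expected shape: counts of jobs (inQueue, inProgress, ...) and workers
# (idle, initializing, running, ...), which shows whether anything ever starts.
print(resp.json())
```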