How to use the comfyui API when running it inside Runpod GPU pods
I can use the UI running on port 3000 with the template runpod/stable-diffusion:comfy-ui-5.0.0, but I am not able to call the API. Is there any documentation or examples for this scenario? I am using this example code to call the API: https://github.com/comfyanonymous/ComfyUI/blob/master/script_examples/basic_api_example.py
Please help.
RunPod can't document every application in existence; you can change line 106 of that script. Point it at your pod URL + /prompt.
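To illustrate the advice above, here is a minimal sketch of how the example script's request could be pointed at a pod's proxied ComfyUI endpoint. It assumes the standard RunPod proxy URL scheme (https://{pod_id}-{port}.proxy.runpod.net) and the ComfyUI /prompt endpoint; the pod ID and workflow below are placeholders, not real values.

```python
import json
import urllib.request

# Assumptions: replace with your own pod ID and the port the template exposes.
POD_ID = "abc123xyz"   # hypothetical RunPod pod ID
COMFY_PORT = 3000      # port the comfy-ui template serves the API on


def prompt_url(pod_id: str, port: int) -> str:
    """Build the RunPod proxy URL for ComfyUI's /prompt endpoint."""
    return f"https://{pod_id}-{port}.proxy.runpod.net/prompt"


def build_request(url: str, workflow: dict) -> urllib.request.Request:
    """Wrap a ComfyUI workflow (API-format JSON) in a POST request,
    mirroring what basic_api_example.py does against localhost."""
    data = json.dumps({"prompt": workflow}).encode("utf-8")
    return urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )


# To actually send it (requires a running pod):
# urllib.request.urlopen(build_request(prompt_url(POD_ID, COMFY_PORT), workflow))
```

The send itself is left commented out, since the URL only resolves while your pod is running.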
Thank you for the reply. Yes, I tried that, but I am getting an error: HTTPError: HTTP Error 403: Forbidden.
I haven't set up any restrictions or API tokens for this GPU pod like we do for the serverless API, yet I still got this error. Is there an API token we can set up for endpoints running on GPU pods?
It's up to your application.
Okay, I was asking because I used the template directly without any changes, so I thought there was something I missed. Do we need to configure any ingress routes other than the HTTP ports, or are they enough to handle POST requests? Thank you.
runpod/stable-diffusion:comfy-ui-5.0.0
They are sufficient for a POST request; your application handles everything.
Thank you