RunPod serverless ComfyUI template
I couldn't find any ComfyUI template on RunPod serverless
65 Replies
It's not public, last time I saw some in github if you like
I found the github. Thanks!
Nice
I'm running into an issue now. Do you know how I can modify the files in my network volume? I'm trying to add some LoRAs to my workflow
I'm using a custom template, so there is no connection option when I start the pod
Just access it with a pod if you like
In pods it will be in /workspace
Use the PyTorch with Jupyter RunPod template in a pod
But i want to use a custom template
I'm following the instructions on the Fooocus GitHub, and apparently you have to set up the pod with their custom template
No connection options
Well, it's from the template; I'm not sure how it's supposed to work. Maybe read the README for a guide?
And edit the pod to expose more ports
This is the guide
https://github.com/davefojtik/RunPod-Fooocus-API/blob/NetworkVolume/docs/network-guide.md
There's no instruction on how to do that
Isn't it for serverless?
I don't understand this
Yes it is
Okay, then it's probably not for pods either
Serverless templates are mostly for serverless, unless you design them for pods too
Best to differentiate them
So there would be no way to edit the files?
There is; just use the PyTorch and Jupyter template to download your files from somewhere
Why do you need that specific template that doesn't work with pods currently?
I'm using that template because i'm following their guide
Are you using that CPU template?
I just realized I'm using a GPU instead of a CPU
But that shouldn't matter, right?
But yes, I'm using that template, just on a GPU
No, unless you're running an app that requires the GPU driver and the GPU itself
What template or image name are you using?
Oh it doesn't require connection or manual setup
In the guide it just says you have to run the specific image that they made, then wait and check the logs until it says it's done
I'm not advising doing this though, as a pre-built image that isn't checked can contain dangerous code or applications
That's fair
Eventually I plan on building my own image, but for now I want to test out their image
Yup, seems like there's a setup script already
Yup sure.. Just saying so that you know it too
I think I set everything up correctly, because I sent a request to the serverless endpoint and got an image back
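For reference, a minimal request against a serverless endpoint looks roughly like this. The `<endpoint_id>` and the inner input fields are placeholders; the exact schema the Fooocus worker expects is defined in its repo docs:

```shell
# Build a request body; RunPod serverless wraps worker input in an "input" key.
# The inner fields are placeholders -- check the worker's docs for its real schema.
PAYLOAD='{"input": {"prompt": "a test prompt"}}'
echo "$PAYLOAD"
# Send it synchronously (needs your endpoint ID and an API key):
# curl -s -X POST "https://api.runpod.ai/v2/<endpoint_id>/runsync" \
#   -H "Authorization: Bearer $RUNPOD_API_KEY" \
#   -H "Content-Type: application/json" \
#   -d "$PAYLOAD"
```

Getting an image back from `runsync` is a good sign the endpoint and volume are wired up correctly.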
Thank you! I'll definitely do that eventually. I just need a fast solution right now; that's why I'm using their image
The issue I have is with adding the LoRAs, i.e. editing the network volume files
Ah yeah, just use a RunPod default template on a GPU pod (CPU pods are bugged right now; you cannot attach network storage)
Like PyTorch, then use the web terminal after you run it to connect to the pod
Oh, so I can just ditch the template they recommended in their guide and use a RunPod default template instead?
Yeah, their template is basically just a setup script, without open ports to connect to
Thank you! I'm doing that right now
For some reason I thought I had to use their template or else it wouldn't work
I think so; their template may contain a setup script specific to the application in the worker template
Maybe for ease of setup
Seems like I was able to add the LoRAs using a RunPod template on a pod, but I sent a request and the LoRA is not applied to the images. Do I need to restart anything, i.e. the network volume, the serverless endpoint..?
are they in the right paths?
i'm not sure how the application works or how to add loras
I'm pretty sure they are
inside:
/workspace/repositories/Fooocus/models/loras/
right?
Yes
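Putting the pieces together: from the pod's web terminal, files dropped into that folder land on the network volume. A minimal sketch, assuming the path confirmed above; the wget URL is a placeholder for wherever your LoRA is hosted:

```shell
# On a pod the network volume mounts at /workspace
# (serverless workers see the same volume as /runpod-volume)
LORA_DIR=/workspace/repositories/Fooocus/models/loras
echo "LoRAs go in: $LORA_DIR"
# Download into it from the web terminal (placeholder URL -- replace with your source):
# wget -P "$LORA_DIR" "https://example.com/cyberpunk.safetensors"
```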
i don't have an idea then
maybe it has to do with the lora or
maybe the request isn't right
or the app isn't right
Based on the logs it seems like the LoRA 'cyberpunk.safetensors' is loaded correctly, because if I use a LoRA name that doesn't exist it gives an error
So probably a problem with the requests
This is unrelated, but I ran my ComfyUI main.py file and it's running
I'm using the RunPod PyTorch template; how can I access the ComfyUI?
The only open port is that of Jupyter
I exposed some ports, but I get 'Bad Gateway' when I try to access them
Then the pod isn't running any application in that port
"Not ready"
But it shows it's running on 8188 @nerdylive
If it helps , my pod is : https://33o7gxa40lsyop-8888.proxy.runpod.net/lab/workspaces/auto-y
the password is 1234
if you run pip install -r requirements.txt
and then python main.py
the server should start running on port 8188
Try setting it to 0.0.0.0
The IP
Not 127.0.0.1
Where should I change it from?
I changed it in the main.py file
but for some reason it's still running on 127.0.0.1
From the command arguments when you run comfyui
Check the ComfyUI docs, then find something with the keyword IP or host, if I'm not wrong
--host 0.0.0.0
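For reference, a sketch of the full launch sequence being described. Note that in current ComfyUI builds the bind-address flag is `--listen`; treat the exact flag name as something to confirm with `python main.py --help`:

```shell
# Inside the ComfyUI directory:
#   pip install -r requirements.txt
#   python main.py --listen 0.0.0.0 --port 8188   # bind all interfaces, not just loopback
# Why 0.0.0.0 matters: the RunPod proxy connects from outside the pod, and a
# 127.0.0.1 bind only accepts loopback connections. Quick demonstration:
python3 -c "import socket; s = socket.socket(); s.bind(('0.0.0.0', 0)); print('bound to', s.getsockname()[0]); s.close()"
```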
Oh my GOD!
It worked!!
Thank you soo much!!
Yup, you're welcome bro
By any chance, are you familiar with the ComfyUI worker: https://github.com/blib-la/runpod-worker-comfy/
Why
Your solution works; I can now view the UI on port 8188 and generate my images successfully
Yep of course it does
But when I use the serverless endpoint with the same workflow JSON, I get an error
2024-11-19 14:00:45.074 [c7msa02ut39p9w] [info] invalid prompt: {'type': 'invalid_prompt', 'message': 'Cannot execute because node FaceDetailer does not exist.',
It seems like the custom nodes are not being recognized on the serverless endpoint, even though I have the snapshot.json in the /runpod-volume (/workspace) directory
If I try a request with no custom nodes on the RunPod serverless endpoint, it works fine.
Weird thing is, it works completely fine in ComfyUI but not on the serverless endpoint
It says there's no such custom node
I'm not sure what's wrong; maybe there are 2 ComfyUI installations or something
That shouldn't happen, since the serverless worker just starts a normal ComfyUI and then sends a request in
I decided to build the image myself @nerdylive
When I use it on RunPod serverless, I get an error
[error]
worker exited with exit code 127
and it keeps running indefinitely
The worker shows unhealthy: Exiting prematurely before requesting jobs.
and it's initializing indefinitely
What command is it executing?
or what program or code is it running?
maybe this:
Value 127 is returned by /bin/sh when the given command is not found within your PATH system variable and it is not a built-in shell command. In other words, the system doesn't understand your command, because it doesn't know where to find the binary you're trying to call.
or another cause; I'm not sure with the lack of details
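The quoted explanation is easy to verify locally: any command the shell can't find exits with status 127:

```shell
# A nonexistent command makes the shell return exit code 127 ("command not found")
sh -c 'this_command_does_not_exist' 2>/dev/null
echo "exit code: $?"    # prints: exit code: 127
```

In a worker image, that usually means the CMD or entrypoint references a binary that isn't on the PATH inside the container.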
This server has recently suffered a network outage and may have spotty network connectivity. We aim to restore connectivity soon, but you may have connection issues until it is resolved. You will not be charged during any network downtime.
What is this?
It seemed like the error was caused because the line endings were automatically converted to CRLF on my Windows machine. I converted all the line endings back to LF, and now I'm getting a new set of errors when I push the image to Docker Hub and use it as a serverless template on RunPod 😉
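A sketch of that CRLF fix (assumes GNU sed; `dos2unix` does the same if installed). With CRLF endings the kernel tries to exec `/bin/sh` followed by a carriage return, which doesn't exist, producing exactly the exit-code-127 symptom:

```shell
# Simulate a script that picked up Windows CRLF line endings
printf '#!/bin/sh\r\necho hello\r\n' > start.sh
# Strip the trailing carriage returns in place (GNU sed)
sed -i 's/\r$//' start.sh
# Verify no CR bytes remain
if grep -q "$(printf '\r')" start.sh; then echo "still has CRLF"; else echo "clean LF endings"; fi
```

Adding `* text eol=lf` to a `.gitattributes` file in the repo prevents Git on Windows from converting the endings in the first place.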
I see, try to reinstall the dependencies
It seems like there are unmet packages
How will I do that? Normally I just build the image, push it to Docker Hub, and use the image as a template on RunPod serverless.
How should I reinstall the dependencies?
You mean rebuild the image again?
Oh then maybe you didn't install the dependencies properly or some of the dependencies conflicted I guess
Look at the log, read the error
The first lines especially