RunPod2mo ago
DEOGEE

RunPod serverless ComfyUI template

I couldn't find any ComfyUI template on RunPod serverless.
65 Replies
nerdylive
nerdylive2mo ago
It's not public, but last time I checked there were some on GitHub if you want to look.
DEOGEE
DEOGEEOP2mo ago
I found the github. Thanks!
nerdylive
nerdylive2mo ago
Nice
DEOGEE
DEOGEEOP2mo ago
I'm running into an issue now. Do you know how I can modify the files in my network volume? I'm trying to add some LoRAs to my workflow. I'm using a custom template, so there is no connection option when I start the pod.
nerdylive
nerdylive2mo ago
Just access it with a pod if you like. In pods it will be in /workspace. Use the PyTorch with Jupyter RunPod template in a pod.
DEOGEE
DEOGEEOP2mo ago
But I want to use a custom template. I'm following the instructions on the Fooocus GitHub, and apparently you have to set up the pod with their custom template.
DEOGEE
DEOGEEOP2mo ago
No connection options
nerdylive
nerdylive2mo ago
Well, it's from the template. I'm not sure how it's supposed to work; maybe read the README for guidance? And edit the pod to expose more ports.
DEOGEE
DEOGEEOP2mo ago
GitHub
RunPod-Fooocus-API/docs/network-guide.md at NetworkVolume · davefoj...
RunPod serverless worker for Fooocus-API. Standalone or with network volume - davefojtik/RunPod-Fooocus-API
nerdylive
nerdylive2mo ago
Isn't it for serverless?
DEOGEE
DEOGEEOP2mo ago
I don't understand this. Yes, it is.
nerdylive
nerdylive2mo ago
Okay, then it's probably not for pods. Serverless templates are mostly for serverless, unless you design them for pods too. Best to differentiate them.
DEOGEE
DEOGEEOP2mo ago
So there would be no way to edit the files?
nerdylive
nerdylive2mo ago
There is: just use the PyTorch and Jupyter template to download your files from somewhere. Why do you need that specific template that doesn't currently work with pods?
DEOGEE
DEOGEEOP2mo ago
I'm using that template because I'm following their guide.
nerdylive
nerdylive2mo ago
You're using that CPU template?
DEOGEE
DEOGEEOP2mo ago
I just realized I'm using a GPU instead of a CPU, but that shouldn't matter, right? But yes, I'm using that template, just on a GPU.
nerdylive
nerdylive2mo ago
No, unless you're running an app that requires the GPU driver and the GPU itself. What template or image name are you using? Oh, it doesn't require a connection or manual setup. In the guide it just says you have to run the specific image they made, then wait and check the logs until it says it's done. I'm not advising you to do this, though: a pre-built image that hasn't been checked can contain dangerous code or applications.
DEOGEE
DEOGEEOP2mo ago
That's fair. Eventually I plan on building my own image, but for now I want to test out theirs.
nerdylive
nerdylive2mo ago
Yup, seems like there's a setup script already. Yup, sure.. just saying so that you know it too.
DEOGEE
DEOGEEOP2mo ago
I think I set everything up correctly, because I sent a request to the serverless endpoint and got an image back. Thank you! I'll definitely do that eventually; I just need a fast solution right now, which is why I'm using their image. The issue I have is with adding the LoRAs, i.e. editing the network volume files.
nerdylive
nerdylive2mo ago
Ah yeah, just use a RunPod default template on a GPU pod (CPU pods are bugged right now; you cannot attach network storage). Like PyTorch, then use the web terminal after you run it to connect to the pod.
DEOGEE
DEOGEEOP2mo ago
Oh, so I can just ditch the template they recommended in their guide and use a RunPod default template instead?
nerdylive
nerdylive2mo ago
Yeah, their template is basically just a setup script, without open ports to connect to.
DEOGEE
DEOGEEOP2mo ago
Thank you! I'm doing that right now. For some reason I thought I had to use their template or else it wouldn't work.
nerdylive
nerdylive2mo ago
I think so. Their template may contain a setup script specific to the application in the worker template, maybe for ease of setup.
DEOGEE
DEOGEEOP2mo ago
It seems like I was able to add the LoRAs using a RunPod template on a pod, but I sent a request and the LoRA is not applied to the images. Do I need to restart anything, i.e. the network volume, the serverless endpoint..?
nerdylive
nerdylive2mo ago
Are they in the right paths? I'm not sure how the application works or how to add LoRAs.
DEOGEE
DEOGEEOP2mo ago
I'm pretty sure they are
nerdylive
nerdylive2mo ago
Inside /workspace/repositories/Fooocus/models/loras/, right?
DEOGEE
DEOGEEOP2mo ago
Yes
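For reference, adding a LoRA from a pod's web terminal can be sketched like this. The directory follows the layout mentioned just above; the download URL is a placeholder, not a real file:

```shell
# Assumed Fooocus LoRA directory from the discussion above; override
# MODELS_DIR if your network volume mounts the repo somewhere else.
MODELS_DIR="${MODELS_DIR:-/workspace/repositories/Fooocus/models/loras}"
mkdir -p "$MODELS_DIR"

# Placeholder download; swap in the real URL of your .safetensors file:
# wget -O "$MODELS_DIR/cyberpunk.safetensors" "https://example.com/lora.safetensors"

# Confirm the file landed where the worker will look for it:
ls -la "$MODELS_DIR"
```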
nerdylive
nerdylive2mo ago
I don't have an idea then. Maybe it has to do with the LoRA, or maybe the request isn't right, or the app isn't right.
DEOGEE
DEOGEEOP2mo ago
Based on the logs it seems like the LoRA 'cyberpunk.safetensors' is loaded correctly, because if I use a LoRA name that doesn't exist it gives an error. So it's probably a problem with the request.
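For what it's worth, a text-to-image payload with LoRAs for a Fooocus-API worker typically looks something like the sketch below. The field names (`loras`, `model_name`, `weight`) and the `input` wrapper are assumptions based on the usual Fooocus-API and RunPod serverless schemas, so verify them against the repo's docs before relying on this:

```python
import json

# Hypothetical request payload -- the "loras" field names here are
# assumed, not verified against your worker version.
payload = {
    "prompt": "cyberpunk city at night",
    "loras": [
        {
            "model_name": "cyberpunk.safetensors",  # must match the file in models/loras/
            "weight": 0.8,                          # 0 would disable the LoRA's effect
        }
    ],
}

# RunPod serverless endpoints wrap the payload in an "input" object:
request_body = json.dumps({"input": payload})
print(request_body)
```

A weight that is accidentally 0, or a `model_name` that doesn't match the file name on the volume exactly, would both produce images with no visible LoRA effect.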
DEOGEE
DEOGEEOP2mo ago
This is unrelated, but I ran my main.py Comfy file and it's running.
DEOGEE
DEOGEEOP2mo ago
I'm using the RunPod PyTorch template; how can I access the ComfyUI?
DEOGEE
DEOGEEOP2mo ago
The only open port is Jupyter's.
DEOGEE
DEOGEEOP2mo ago
I exposed some ports, but I get 'Bad Gateway' when I try to access them.
nerdylive
nerdylive2mo ago
Then the pod isn't running any application on that port, so it shows "Not ready".
DEOGEE
DEOGEEOP2mo ago
But it shows it's running on 8188 @nerdylive
DEOGEE
DEOGEEOP2mo ago
If it helps, my pod is: https://33o7gxa40lsyop-8888.proxy.runpod.net/lab/workspaces/auto-y and the password is 1234. If you run pip install -r requirements.txt and then python main.py, the server should start running on port 8188.
nerdylive
nerdylive2mo ago
Try setting the IP to 0.0.0.0, not 127.0.0.1.
DEOGEE
DEOGEEOP2mo ago
Where should I change it from? I changed it in the main.py file.
DEOGEE
DEOGEEOP2mo ago
But for some reason it's still running on 127.0.0.1.
nerdylive
nerdylive2mo ago
From the command arguments when you run ComfyUI. Check the ComfyUI docs, then search for the keyword IP or host. If I'm not wrong, it's --listen 0.0.0.0
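The reason this matters: a server bound to 127.0.0.1 only accepts connections from inside the pod itself, so RunPod's HTTP proxy can never reach it, while 0.0.0.0 listens on every interface. A minimal sketch of the difference:

```python
import socket

def bound_address(host: str) -> str:
    """Bind a throwaway socket and report which address it listens on."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((host, 0))  # port 0 lets the OS pick any free port
    addr = s.getsockname()[0]
    s.close()
    return addr

# 127.0.0.1 = loopback only: unreachable through the RunPod proxy.
print(bound_address("127.0.0.1"))  # 127.0.0.1
# 0.0.0.0 = all interfaces: this is what the proxy can reach.
print(bound_address("0.0.0.0"))    # 0.0.0.0
```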
DEOGEE
DEOGEEOP2mo ago
Oh my GOD! It worked!! Thank you so much!!
nerdylive
nerdylive2mo ago
Yup, you're welcome, bro.
DEOGEE
DEOGEEOP2mo ago
By any chance, are you familiar with the ComfyUI worker: https://github.com/blib-la/runpod-worker-comfy/
GitHub
GitHub - blib-la/runpod-worker-comfy: ComfyUI as a serverless API o...
ComfyUI as a serverless API on RunPod. Contribute to blib-la/runpod-worker-comfy development by creating an account on GitHub.
nerdylive
nerdylive2mo ago
Why
DEOGEE
DEOGEEOP2mo ago
Your solution works; I can now view the UI on port 8188 and generate my images successfully.
nerdylive
nerdylive2mo ago
Yep of course it does
DEOGEE
DEOGEEOP2mo ago
But when I use the serverless endpoint with the same workflow JSON, I get an error: 2024-11-19 14:00:45.074 [c7msa02ut39p9w] [info] invalid prompt: {'type': 'invalid_prompt', 'message': 'Cannot execute because node FaceDetailer does not exist.', It seems like the custom nodes are not being recognized in the serverless endpoint, even though I have the snapshot.json in the /runpod-volume (/workspace) directory. If I try a request with no custom nodes on the RunPod serverless endpoint, it works fine. The weird thing is it works completely fine in ComfyUI, but not on the serverless endpoint.
nerdylive
nerdylive2mo ago
It says that custom node doesn't exist. I'm not sure what's wrong; maybe there are two ComfyUI installations or something. That shouldn't happen, since the serverless worker just starts a normal ComfyUI and then sends a request in.
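One common cause: FaceDetailer is not a built-in node but comes from a custom node pack (ComfyUI-Impact-Pack, assuming that's the pack the workflow uses), and the serverless image itself has to contain it. A hedged sketch of baking it into the worker image; the /comfyui install location is an assumption, so point COMFY_DIR at wherever your image actually installs ComfyUI:

```shell
# Sketch only: install the custom node pack that provides FaceDetailer
# into the worker image. COMFY_DIR is an assumption, not the verified
# path inside the blib-la worker image.
COMFY_DIR="${COMFY_DIR:-/comfyui}"
git clone https://github.com/ltdrdata/ComfyUI-Impact-Pack \
    "$COMFY_DIR/custom_nodes/ComfyUI-Impact-Pack"
pip install -r "$COMFY_DIR/custom_nodes/ComfyUI-Impact-Pack/requirements.txt"
```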
DEOGEE
DEOGEEOP5w ago
I decided to build the image myself @nerdylive
DEOGEE
DEOGEEOP5w ago
When I use it on RunPod serverless, I get the error [error] worker exited with exit code 127, and it keeps running indefinitely. The worker shows unhealthy: Exiting prematurely before requesting jobs. and it's initializing indefinitely.
DEOGEE
DEOGEEOP5w ago
nerdylive
nerdylive5w ago
What command is it executing, or what program or code is it running? Maybe this: Value 127 is returned by /bin/sh when the given command is not found within your PATH system variable and is not a built-in shell command. In other words, the system doesn't understand your command, because it doesn't know where to find the binary you're trying to call. Or another cause; I'm not sure with the lack of details.
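The explanation above is easy to demonstrate: running any command that doesn't exist on $PATH yields exactly exit code 127.

```shell
# 127 = "command not found": the shell searched $PATH and found nothing.
sh -c 'this-command-does-not-exist-xyz' 2>/dev/null
echo "exit code: $?"   # prints: exit code: 127
```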
Restlessperson
This server has recently suffered a network outage and may have spotty network connectivity. We aim to restore connectivity soon, but you may have connection issues until it is resolved. You will not be charged during any network downtime.
nerdylive
nerdylive5w ago
What is this?
DEOGEE
DEOGEEOP5w ago
It seemed like the error was caused because the line endings were converted to CRLF automatically on my Windows machine. I was able to convert all the line endings to LF, and now I'm getting a new set of errors when I push the image to Docker Hub and use it as a serverless template on RunPod 😉
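The CRLF symptom ties back to the exit-code-127 error: a shebang line ending in a carriage return makes the kernel look for an interpreter literally named `/bin/sh\r`, which doesn't exist. Stripping the carriage returns fixes it; `sed` is used here as a sketch, and `dos2unix` does the same job:

```shell
# Reproduce a script saved with Windows (CRLF) line endings:
printf '#!/bin/sh\r\necho hello\r\n' > /tmp/crlf-demo.sh

# Strip the trailing \r from every line:
sed -i 's/\r$//' /tmp/crlf-demo.sh

sh /tmp/crlf-demo.sh   # prints: hello
```

Configuring Git with `core.autocrlf=false` (or a `.gitattributes` rule forcing LF for shell scripts) prevents Windows checkouts from reintroducing the problem at build time.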
nerdylive
nerdylive4w ago
I see; try to reinstall the dependencies. It seems like there are unmet packages.
DEOGEE
DEOGEEOP4w ago
How will I do that? Normally I just build the image, push it to Docker Hub, and use the image as a template on RunPod serverless. How should I reinstall the dependencies? You mean rebuild the image again?
nerdylive
nerdylive4w ago
Oh, then maybe you didn't install the dependencies properly, or some of the dependencies conflicted, I guess. Look at the log in the error and read it, the first lines especially.