DEOGEE (OP, 5d ago)

RunPod serverless ComfyUI template

I couldn't find any ComfyUI template on RunPod serverless.
34 Replies
nerdylive (4d ago)
It's not public. Last time I checked there were some on GitHub, if you want to look.
DEOGEE (OP, 3d ago)
I found the GitHub repo. Thanks!
nerdylive (3d ago)
Nice
DEOGEE (OP, 3d ago)
I"m running into an issue now , Do you know how i can modify the files in my network volume ? I"m trying to add some loras to my workflow I'm using a custom template so there is no connection option when i start the pod
nerdylive (3d ago)
Just access it with a pod if you like. In pods the network volume is mounted at /workspace. Use the RunPod PyTorch template with Jupyter on a pod.
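A minimal sketch of what that can look like once the PyTorch/Jupyter pod is running, assuming you have a direct download URL for your LoRA; the URL, filename, and target directory below are placeholders to adjust:

```python
# Run inside the pod's Jupyter terminal/notebook to pull a LoRA onto the
# network volume. URL and filename are hypothetical; the target path matches
# the one discussed later in this thread -- adjust to what your worker reads.
import urllib.request
from pathlib import Path

lora_url = "https://example.com/cyberpunk.safetensors"  # placeholder download link
target_dir = Path("/workspace/repositories/Fooocus/models/loras")
target_dir.mkdir(parents=True, exist_ok=True)

dest = target_dir / "cyberpunk.safetensors"
urllib.request.urlretrieve(lora_url, dest)
print(f"Saved {dest} ({dest.stat().st_size / 1e6:.1f} MB)")
```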
DEOGEE (OP, 3d ago)
But I want to use a custom template. I'm following the instructions on the Fooocus GitHub, and apparently you have to set up the pod with their custom template.
DEOGEE (OP, 3d ago)
No connection options
nerdylive (3d ago)
Well, that's down to the template; I'm not sure how it's supposed to work. Maybe read the README for guidance? And edit the pod to expose more ports.
DEOGEE (OP, 3d ago)
GitHub: RunPod-Fooocus-API/docs/network-guide.md at NetworkVolume (davefojtik/RunPod-Fooocus-API)
RunPod serverless worker for Fooocus-API. Standalone or with network volume.
nerdylive (3d ago)
Isn't it for serverless?
DEOGEE (OP, 3d ago)
I don't understand this. Yes, it is.
nerdylive (3d ago)
Okay, then it's probably not meant for pods. Serverless templates are mostly for serverless use, unless you design them to work on pods too. Best to keep the two separate.
DEOGEE (OP, 3d ago)
So there would be no way to edit the files?
nerdylive (3d ago)
There is: just use the PyTorch + Jupyter template to download your files from somewhere. Why do you need that specific template when it doesn't currently work with pods?
DEOGEE (OP, 3d ago)
I'm using that template because I'm following their guide.
nerdylive (3d ago)
Are you using that CPU template?
DEOGEE (OP, 3d ago)
I just realized I'm using a GPU instead of a CPU, but that shouldn't matter, right? Yes, I'm using that template, just on a GPU.
nerdylive (3d ago)
No, unless you're running an app that requires the GPU driver and the GPU itself. What template or image name are you using? Oh, it doesn't require a connection or manual setup. The guide just says you have to run the specific image they made, then wait and check the logs until it says it's done. I'm not advising you to do this, though; a pre-built image that hasn't been vetted can contain dangerous code or applications.
DEOGEE (OP, 3d ago)
That's fair. Eventually I plan on building my own image, but for now I want to test out theirs.
nerdylive (3d ago)
Yup, seems like there's a setup script already. Sure, just saying so that you know it too.
DEOGEE (OP, 3d ago)
I think I set everything up correctly, because I sent a request to the serverless endpoint and got an image back. Thank you! I'll definitely do that eventually; I just need a fast solution right now, which is why I'm using their image. The issue I have is with adding the LoRAs, i.e. editing the network volume files.
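For reference, a minimal sketch of the kind of /runsync call that returns an image. The endpoint ID and the whole "input" payload are placeholders; the exact input schema depends on the worker image, so check its README:

```python
# Hedged sketch of calling a RunPod serverless endpoint synchronously.
# ENDPOINT_ID and the "input" payload are placeholders.
import os
import requests

ENDPOINT_ID = "your-endpoint-id"          # hypothetical
API_KEY = os.environ["RUNPOD_API_KEY"]    # set in your environment

resp = requests.post(
    f"https://api.runpod.ai/v2/{ENDPOINT_ID}/runsync",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"input": {"prompt": "a cyberpunk city at night"}},  # placeholder payload
    timeout=300,
)
resp.raise_for_status()
print(resp.json())  # job status plus the worker's output (e.g. image data or URL)
```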
nerdylive (3d ago)
Ah yeah, just use a RunPod default template on a GPU pod (CPU pods are bugged right now; you can't attach network storage). Something like PyTorch, then use the web terminal after it starts to connect to the pod.
DEOGEE (OP, 3d ago)
Oh, so I can just ditch the template they recommended in their guide and use a RunPod default template instead?
nerdylive (3d ago)
Yeah, their template is basically just a setup script, without any open ports to connect to.
DEOGEE (OP, 3d ago)
Thank you! I'm doing that right now. For some reason I thought I had to use their template or it wouldn't work.
nerdylive (3d ago)
I think so. Their template probably contains a setup script specific to the application in the worker template, maybe just to make setup easier.
DEOGEE (OP, 3d ago)
Seems like I was able to add the LoRAs using a RunPod template on a pod, but I sent a request and the LoRA is not applied to the images. Do I need to restart anything, e.g. the network volume or the serverless endpoint?
nerdylive (3d ago)
Are they in the right paths? I'm not sure how the application works or how to add LoRAs.
DEOGEE (OP, 3d ago)
I'm pretty sure they are
nerdylive (3d ago)
Inside /workspace/repositories/Fooocus/models/loras/, right?
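A quick way to sanity-check that from the pod (e.g. a Jupyter cell), assuming the same path; the directory listing shows exactly what the worker will see:

```python
# List LoRA files on the network volume so the filenames can be compared
# against what the request uses.
from pathlib import Path

lora_dir = Path("/workspace/repositories/Fooocus/models/loras")
for f in sorted(lora_dir.glob("*.safetensors")):
    print(f.name, f"{f.stat().st_size / 1e6:.1f} MB")
```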
DEOGEE (OP, 3d ago)
Yes
nerdylive (3d ago)
I don't have any ideas then. Maybe it has to do with the LoRA itself, or the request isn't right, or something's off with the app.
DEOGEE (OP, 3d ago)
Based on the logs it seems like the LoRA 'cyberpunk.safetensors' is loaded correctly, because if I use a LoRA name that doesn't exist it gives an error. So it's probably a problem with the request.
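One common cause of "LoRA loads but has no visible effect" is the request not naming it, or using a weight of 0. A hedged sketch of how the LoRA might be specified in the payload; the field names below follow the usual Fooocus-API style but are an assumption for this worker, so verify them against its README:

```python
# Hypothetical request payload naming the LoRA explicitly with a nonzero weight.
payload = {
    "input": {
        "prompt": "cyberpunk street at night, neon rain",
        "loras": [
            {"model_name": "cyberpunk.safetensors", "weight": 0.8},  # weight > 0
        ],
    }
}
# Send `payload` with the same /runsync call sketched earlier in the thread.
```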