RunPod
• Created by MattArgentina on 8/16/2024 in #⚡|serverless
Ashley Kleynhan's Github repository for ComfyUI serverless no longer available
Try this
4 replies
RunPod
• Created by teddycatsdomino on 8/11/2024 in #⚡|serverless
Runpod serverless overhead/slow
😨😨oh I see….
185 replies
First find your serverless worker's pod IDs, then use this GraphQL mutation to edit your pod's ports:
Edit pod request:
https://api.runpod.io/graphql?api_key=<your_api_key>
{
  "operationName": "editPodJob",
  "variables": {
    "input": {
      "podId": "jg9z32zai",
      "dockerArgs": "",
      "imageName": "weixuanf/runpod-worker-comfy",
      "containerDiskInGb": 50,
      "volumeInGb": 60,
      "volumeMountPath": "/workspace",
      "ports": "8080/http,8888/http"
    }
  },
  "query": "mutation editPodJob($input: PodEditJobInput!) {\n podEditJob(input: $input) {\n id\n env\n port\n ports\n dockerArgs\n imageName\n containerDiskInGb\n volumeInGb\n volumeMountPath\n __typename\n }\n}"
}
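For reference, the mutation above can also be sent from a script instead of being replayed from the browser. A minimal Python sketch using only the standard library; the pod ID, image name, and port string are the placeholder values from the payload above, and `<your_api_key>` must be replaced with a real key before the request is actually sent:

```python
import json
import urllib.request

GRAPHQL_URL = "https://api.runpod.io/graphql?api_key={api_key}"

EDIT_POD_QUERY = (
    "mutation editPodJob($input: PodEditJobInput!) {\n"
    "  podEditJob(input: $input) {\n"
    "    id\n    ports\n    imageName\n    __typename\n  }\n}"
)

def build_edit_pod_payload(pod_id, ports, image_name,
                           container_disk_gb=50, volume_gb=60,
                           volume_mount_path="/workspace"):
    """Build the editPodJob request body shown in the thread above."""
    return {
        "operationName": "editPodJob",
        "variables": {
            "input": {
                "podId": pod_id,
                "dockerArgs": "",
                "imageName": image_name,
                "containerDiskInGb": container_disk_gb,
                "volumeInGb": volume_gb,
                "volumeMountPath": volume_mount_path,
                "ports": ports,
            }
        },
        "query": EDIT_POD_QUERY,
    }

def edit_pod_ports(api_key, payload):
    """POST the mutation to the RunPod GraphQL endpoint."""
    req = urllib.request.Request(
        GRAPHQL_URL.format(api_key=api_key),
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    payload = build_edit_pod_payload(
        pod_id="jg9z32zai",                       # pod ID from the thread
        ports="8080/http,8888/http",
        image_name="weixuanf/runpod-worker-comfy",
    )
    # edit_pod_ports("<your_api_key>", payload)   # uncomment with a real key
```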
@flash-singh could you give a more official guide on how to set up websocket connections on serverless workers? I think we all need this!
I see. Already calling it manually
How do I use a progress hook on a serverless worker? I know there's only a webhook param to pass into a job, but I thought that's only triggered when the job finishes.
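One common way to surface intermediate progress without waiting for the finish webhook is to have the client poll the job's status endpoint. A minimal polling sketch; the `endpoint_id` and `job_id` values are placeholders, and `https://api.runpod.ai/v2/{endpoint_id}/status/{job_id}` is RunPod's job-status route:

```python
import json
import urllib.request

API_BASE = "https://api.runpod.ai/v2"

def status_url(endpoint_id, job_id):
    """Build the job-status URL for a serverless endpoint."""
    return f"{API_BASE}/{endpoint_id}/status/{job_id}"

def poll_status(endpoint_id, job_id, api_key):
    """Fetch the current job status (e.g. IN_QUEUE / IN_PROGRESS / COMPLETED)."""
    req = urllib.request.Request(
        status_url(endpoint_id, job_id),
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Placeholders; replace with a real endpoint ID, job ID, and API key.
    print(status_url("<endpoint_id>", "<job_id>"))
```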
You can go to your serverless page, open the F12 network tab, and click the refresh icon; then examine the GraphQL requests, and you will see a pods field under the myEndpoints field.
No, the pod ID is different.
I'll get back to you once I'm home and can find the code. You can also spin up a pod, open F12 -> network tab, and then edit the pod to expose a random port; you will see the GraphQL request pop up there. That's the one you want.
You can, but you need to manually make an editPodJob GraphQL call to update the ports field on the endpoint's pod.
RunPod
• Created by NERDDISCO on 8/9/2024 in #⚡|serverless
Slow network volume
Thank you! Yeah, some benchmarks would be super helpful for choosing which one to use in different situations.
64 replies
Ok once I try again, I’ll give you that data
I'm confused. Why shouldn't there be a significant difference when loading the model? I'd think physical storage should be significantly faster than network storage.
Because when I stored models on a network volume, loading the model into GPU VRAM took much longer than when the model was baked into the container.
@NERDDISCO Is a network volume supposed to be slower than baking the model into the container image? If baked into the container, the model is stored physically on the GPU machine, but with a network volume the model needs to be transferred over the network to load onto the GPU machine.
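To produce the benchmark numbers discussed in this thread, a simple timing harness is enough: sequentially read the same model file from both locations and compare. A minimal sketch; the two paths are assumptions for a typical worker layout (network volumes commonly mount at /runpod-volume, and the baked-in copy lives at whatever path the container image uses):

```python
import time
from pathlib import Path

def time_read(path, chunk_size=64 * 1024 * 1024):
    """Read a file sequentially in chunks and return elapsed seconds."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(chunk_size):
            pass
    return time.perf_counter() - start

if __name__ == "__main__":
    # Assumed paths for illustration; adjust to your worker's layout.
    for label, path in [
        ("network volume", "/runpod-volume/models/model.safetensors"),
        ("baked into image", "/models/model.safetensors"),
    ]:
        if Path(path).exists():
            print(f"{label}: {time_read(path):.2f}s")
```

Note that a cold read from the network volume is the interesting number; a second read of the same file may be served from the page cache and look artificially fast.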