Getting an error with workers on serverless
Running a Docker image for ComfyUI and using the ReActor face swap node:
2024-08-11T20:59:36.013496862Z File "/comfyui/custom_nodes/comfyui-reactor-node/scripts/reactor_faceswap.py", line 91, in process
2024-08-11T20:59:36.013503712Z result = swap_face(
2024-08-11T20:59:36.013518132Z File "/comfyui/custom_nodes/comfyui-reactor-node/scripts/reactor_swapper.py", line 230, in swap_face
2024-08-11T20:59:36.013529842Z source_faces = analyze_faces(source_img)
2024-08-11T20:59:36.013534952Z File "/comfyui/custom_nodes/comfyui-reactor-node/scripts/reactor_swapper.py", line 147, in analyze_faces
2024-08-11T20:59:36.013541262Z face_analyser = getAnalysisModel(det_size)
2024-08-11T20:59:36.013552282Z File "/comfyui/custom_nodes/comfyui-reactor-node/scripts/reactor_swapper.py", line 77, in getAnalysisModel
2024-08-11T20:59:36.013561642Z ANALYSIS_MODEL = insightface.app.FaceAnalysis(
2024-08-11T20:59:36.013571222Z File "/comfyui/custom_nodes/comfyui-reactor-node/reactor_patcher.py", line 59, in patched_faceanalysis_init
2024-08-11T20:59:36.013584022Z assert 'detection' in self.models
This error occurs randomly and I'm not sure where it's coming from.
Surely there's a longer log than that, isn't there?
Check the dependencies for that custom node, and whether you installed the required models.
It works on other workers.
It's quite spotty when new workers are spun up.
We have been running the image for the past month, and this issue suddenly started last night.
Try not to post cropped logs; share the full log, or at least the whole log from one worker. I think it might be cut off.
I don't know what's causing that error.
These are the logs.
Check your pip dependencies and the required models; are they actually there?
Yes, they're there.
If they weren't, how would the rest of the workers run?
The 4m worker is operating fine.
CHATGPT: The error message you've encountered is an AssertionError raised during the execution of the comfyui-reactor-node. Specifically, the error occurs in the getAnalysisModel function of the reactor_swapper.py script when it tries to initialize the face analysis model with insightface.app.FaceAnalysis. The assertion fails because the expected 'detection' model is not found within the self.models dictionary.
Suggest you check the model loading, specifically the face analysis model.
These are the errors I'm getting:
2024-08-11T23:13:17.650801261Z progress 0.75
2024-08-11T23:13:17.703740433Z [ReActor] 23:13:17 - STATUS - Working: source face index [0], target face index [0]
2024-08-11T23:13:17.726574610Z [ReActor] 23:13:17 - STATUS - Analyzing Source Image...
2024-08-11T23:13:17.726669049Z download_path: /comfyui/models/insightface/models/buffalo_l
2024-08-11T23:13:17.727149913Z Downloading /comfyui/models/insightface/models/buffalo_l.zip from https://github.com/deepinsight/insightface/releases/download/v0.7/buffalo_l.zip...
2024-08-11T23:13:17.836528031Z updating run live status ComfyUIDeployExternalImage
When are you seeing these downloads? With serverless you really don't want to download the models on every request; that will cause massive delays. You want to either bake them into your Docker image or upload them to a network volume, so you don't have to download the model for each request.
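As a sketch of the first option, you could pre-download the buffalo_l models at image build time. This assumes the standard insightface layout shown in the download logs above, and that wget and unzip are available in your base image:

```dockerfile
# Hypothetical snippet: bake the insightface buffalo_l models into the image
# so new workers don't download them on first request.
# Path and URL taken from the download logs earlier in this thread.
RUN mkdir -p /comfyui/models/insightface/models && \
    wget -q -O /tmp/buffalo_l.zip \
      https://github.com/deepinsight/insightface/releases/download/v0.7/buffalo_l.zip && \
    unzip -q /tmp/buffalo_l.zip -d /comfyui/models/insightface/models/buffalo_l && \
    rm /tmp/buffalo_l.zip
```

With the models in the image, a cold-started worker skips the "Downloading buffalo_l.zip" step entirely.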
I'm seeing these downloads when a new worker starts.
New worker new network storage?
Yep I agree 👍
I'm using the ComfyDeploy serverless for RunPod template, and it doesn't allow loading custom models such as ReActor from a network volume.
WORKDIR /comfyui/custom_nodes
RUN git clone https://github.com/mav-rik/facerestore_cf --recursive
WORKDIR /comfyui/custom_nodes/facerestore_cf
RUN git reset --hard 67f90bc6be976fb58169866155346b0da13bebee
RUN if [ -f requirements.txt ]; then python3 -m pip install -r requirements.txt; fi
RUN if [ -f install.py ]; then python3 install.py || echo "install script failed"; fi
How do I change this to store the models in the Docker image?
I can send the entire Dockerfile if needed.
You can add COPY command(s) in your Dockerfile to copy the models to where they go. It uses the following format:
COPY <src> <dest>
The <src> is the local path to where your model is; you can either specify files, or specify a directory, and everything in said directory will be copied into your container.
COPY app/ /app/
That will copy everything from the app/ directory (where your Dockerfile lives) into /app/ inside the container.
COPY app/my_model_checkpoint.ckp /some/path/goes/here/my_model_checkpoint.ckp
This will copy my_model_checkpoint.ckp from the app directory (where your Dockerfile lives) to /some/path/goes/here/my_model_checkpoint.ckp inside the container. You'll have to adjust your paths and filenames accordingly.