Microsoft Florence-2 model in serverless container doesn't work
I'm trying to use Florence-2 models in a ComfyUI workflow running in a serverless container, and it fails with this error:
raise RuntimeError(f'{node_type}: {exception_message}')
RuntimeError: DownloadAndLoadFlorence2Model: Using `low_cpu_mem_usage=True` or a `device_map` requires Accelerate: `pip install accelerate`
The Accelerate library is already installed in the venv on the network storage where ComfyUI runs, and I also installed it in the Docker container. Does anyone know how to solve this problem? Thanks in advance.
Is accelerate listed in the requirements.txt file? If not, you will need to add it there and rebuild the Docker image.
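For reference, a quick way to confirm the rebuilt image actually has accelerate available is a small check script (a sketch; the file name is made up, run it with the image's own Python, e.g. `docker run --rm <image> python /check_accelerate.py`):

```python
# check_accelerate.py - minimal sanity check (sketch) that the image's own
# Python environment can import accelerate, independent of any network volume.
import importlib.util
import sys

spec = importlib.util.find_spec("accelerate")
if spec is None:
    sys.exit("accelerate is NOT importable by " + sys.executable)

import accelerate
print("accelerate", accelerate.__version__, "loaded from", spec.origin)
print("interpreter:", sys.executable)
```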
I added accelerate to requirements.txt, rebuilt the image, and created the endpoint, but got the same result:
{
  "delayTime": 5722,
  "error": "Traceback (most recent call last):\n  File \"/rp_handler.py\", line 317, in handler\n    raise RuntimeError(f'{node_type}: {exception_message}')\nRuntimeError: DownloadAndLoadFlorence2Model: Using `low_cpu_mem_usage=True` or a `device_map` requires Accelerate: `pip install accelerate`\n",
  "executionTime": 34016,
  "id": "df70ffee-8bf7-4304-9cdd-f6ef9349139c-u1",
  "status": "FAILED",
  "workerId": "7l3v5ol024c0kb"
}
Does your handler have `import accelerate` in the Python source code?
The accelerate lib is imported in transformers.py in the ComfyUI core.
I suggest adding `import accelerate` at the top of your rp_handler.py file, since your error shows it failing on line 317 of that file because the accelerate module is not loaded.
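Concretely, that suggestion would look something like this (a sketch; the rest of rp_handler.py is assumed and left unchanged):

```python
# rp_handler.py (top of file) - sketch of the suggested change: import accelerate
# before anything else, so a missing install fails immediately with a clear message
# instead of surfacing deep inside the ComfyUI / transformers loading code.
try:
    import accelerate
    print(f"accelerate {accelerate.__version__} loaded from {accelerate.__file__}")
except ImportError as exc:
    raise RuntimeError("accelerate is not importable in this interpreter") from exc

# ... existing handler code continues below ...
```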
It didn't help, I guess because the error is raised in transformers.py in the ComfyUI core, but I'm sure that accelerate is installed in every possible environment. Another interesting thing: if I deploy a Pod (not serverless) using the same network volume as the serverless endpoint, the Florence-2 model works without any problems.
I added some debug lines inside the container.
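For example, debug lines along these lines (a sketch of the kind of checks meant here, not the exact code used) make it obvious which interpreter and site-packages the worker is really using:

```python
# Debug sketch: print which Python environment is active at runtime,
# so any mismatch between the expected venv and the actual one becomes visible.
import importlib.util
import sys

print("sys.executable:", sys.executable)      # path of the running interpreter
print("sys.prefix:", sys.prefix)               # venv root (or system prefix)
print("sys.path (first entries):", sys.path[:5])

spec = importlib.util.find_spec("accelerate")
print("accelerate resolves to:", spec.origin if spec else "NOT FOUND")
```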
That's odd. Maybe it's looking for a specific version of accelerate? If it was recently updated, maybe roll back a version. If not, I'm not sure what else it could be.
Nope, it's caused by a venv path mismatch.
How do I solve this problem?
Responded to the Zendesk ticket.
Create a pod and mount the storage as /runpod-volume
Create the venv at /runpod-volume
Use serverless (a quick sanity check is sketched below)
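The underlying issue is the mount path: serverless workers see the network volume at /runpod-volume, while a pod typically mounts it at /workspace, so a venv created under /workspace carries paths that no longer resolve on serverless. After recreating the venv as described above, a check like this (a sketch; adjust the expected path if your setup differs) confirms the worker is really using it:

```python
# Sanity-check sketch: confirm the running interpreter is the venv that lives on
# the network volume, which serverless workers mount at /runpod-volume.
import sys

EXPECTED_MOUNT = "/runpod-volume"   # serverless mount point for the network volume

print("interpreter:", sys.executable)
print("prefix:", sys.prefix)

if not sys.prefix.startswith(EXPECTED_MOUNT):
    print("WARNING: not running from the venv on the network volume; "
          "it was probably created under a different mount path (e.g. /workspace).")
```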