Parallel image generation with different prompts
Hi!
I am running a1111 on serverless.
Is it possible to generate images in parallel with different prompts? As far as I know, in the SD web UI you can only set the batch size, but that uses the same prompt for every image, and it also needs an external queue manager.
Is there any serverless pod or some particular SD web UI configuration that allows me to do this?
Thank you
Probably not. Most workers and the web UI use a standard queue system: they pick up a request, generate the image, then take the next one.
You could just send a separate request to serverless for each prompt; if you have enough workers, they can be handled in parallel.
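A minimal sketch of that fan-out pattern: one request per prompt, dispatched concurrently. The endpoint URL, API key, and payload shape below are illustrative assumptions about a RunPod-style serverless API, not a confirmed interface.

```python
# Sketch: fan out one serverless request per prompt so each image gets
# its own prompt. ENDPOINT, API_KEY, and the payload shape are
# illustrative assumptions, not a confirmed API.
import concurrent.futures
import json
import urllib.request

ENDPOINT = "https://api.example.com/v2/your-endpoint/runsync"  # hypothetical
API_KEY = "YOUR_API_KEY"  # hypothetical

def build_payload(prompt: str) -> bytes:
    """One JSON body per prompt, so every request is independent."""
    return json.dumps({"input": {"prompt": prompt}}).encode("utf-8")

def generate(prompt: str) -> dict:
    """Send a single (blocking) generation request and return the JSON reply."""
    req = urllib.request.Request(
        ENDPOINT,
        data=build_payload(prompt),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def fan_out(prompts: list[str]) -> list[dict]:
    """Dispatch all prompts concurrently; the serverless platform spreads
    them across however many workers are available."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(prompts)) as pool:
        return list(pool.map(generate, prompts))
```

Calling something like `fan_out(["a red fox in snow", "a lighthouse at dusk"])` would then produce two images with two different prompts, with parallelism limited only by your max-worker count.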
damn, then how are all those chatbots out there able to handle tons of image generation requests simultaneously? It seems strange that they'd have hundreds of running servers
You can achieve that with lots of max workers
guess it's the only solution 😅 Is it correct that even with network volumes, the additional workers take the same time to load? I mean, every time a worker starts it downloads everything again, and unfortunately it's a 30GB Docker image
in the meantime, thanks a lot to you both 🙂
The 30GB Docker image is pulled in advance by your workers, which then sit idle waiting for requests. The image is not pulled when a worker needs to handle a request; it has already been pulled by then.
Your models etc. will be loaded from network storage, which can take a minute or two on a cold start.
ok thanks a lot!