Stable Diffusion API Execution Time
I am posting this in the hope of a response from RunPod support @flash-singh, or anyone other than @justin.
Is 30+ seconds of execution time on a serverless 24GB GPU, via the A1111 API Docker worker, acceptable/expected for a 768px image? The exact same model/prompt/settings runs in 3 seconds on a pod using the A1111 UI. Why is serverless so much slower? To be clear, I am asking about execution time only -- not delay, queue, or spin-up time.
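For context on how I'm separating the two numbers: as I understand it, RunPod's serverless `/runsync` (and `/status`) responses report `delayTime` and `executionTime` as separate fields, both in milliseconds. A minimal sketch of how I read them (the response dict here is a hypothetical stand-in for what `requests.post(f"https://api.runpod.ai/v2/{ENDPOINT_ID}/runsync", ...)` would return; endpoint ID and values are placeholders):

```python
# Hypothetical example shaped like a RunPod /runsync reply; values are
# placeholders illustrating the numbers in question, not a real measurement.
response = {
    "status": "COMPLETED",
    "delayTime": 1200,       # ms spent queued / cold-starting (NOT what I'm asking about)
    "executionTime": 31000,  # ms the worker actually spent generating the image
    "output": {},
}

def split_timings(resp):
    """Return (delay, execution) in seconds from a serverless job response."""
    return resp.get("delayTime", 0) / 1000, resp.get("executionTime", 0) / 1000

delay_s, exec_s = split_timings(response)
print(f"delay: {delay_s:.1f}s, execution: {exec_s:.1f}s")
```

The 30+ seconds I'm reporting is the `executionTime` figure alone, so cold starts and queueing are already excluded from the comparison.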