Serverless SDXL Turbo endpoint returning inconsistent images for the same seed
I deployed a serverless SDXL Turbo endpoint but it returns different results despite using the same input seed and settings. Works fine with SDXL. Am I missing something?
Sounds like the seed is being ignored, check the code to figure out why.
i think that's how it should work, the "seed" doesn't pin the result down to "that one image", but it makes the random noise generation deterministic, so you're likely to get an identical image when generating again with the same settings
( this is my understanding of the diffusion process )
if you're using comfyui and you generate multiple times with the same seed, your image might be cached in the ksampler, and it will just re-decode the same latent sample through the vae and give you the exact same image as the previous generations
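to illustrate the seed point ( a minimal sketch, not the worker's code ): the same seed produces the same starting noise, and the denoising is deterministic from there

import torch

# two generators seeded with the same value produce identical "random" noise,
# which is what makes a seeded generation reproducible
gen_a = torch.Generator("cpu").manual_seed(42)
gen_b = torch.Generator("cpu").manual_seed(42)

noise_a = torch.randn((1, 4, 64, 64), generator=gen_a)
noise_b = torch.randn((1, 4, 64, 64), generator=gen_b)

print(torch.equal(noise_a, noise_b))  # True: same seed, same starting latent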
How can I check the code from the container I am deploying?
https://hub.docker.com/r/runpod/sdxl-turbo
Check the github repo, search it up
What is odd is that I don't get this issue when using SDXL on RunPod or SDXL Turbo in Comfy; there I get consistency. I am building a streaming-like app and I really cannot do without the consistency.
It should be correct if I'm not wrong, from what I remember of checking the repo out
Wow, streaming with sdxl, interesting
On comfyui, does it sample again? Like in the ksampler
Seed is not used anywhere as far as I can tell:
https://github.com/runpod-workers/worker-sdxl-turbo/blob/main/src/handler.py
How do we know this is the code that is pulled by docker image runpod/sdxl-turbo?
Because the image you're using is a RunPod image and all RunPod images are open source.
Seems like so
The main branch is mostly what the latest pushed tag was built from
You can edit it and build your own docker image if you'd like to
I tested sdxl-turbo in ComfyUI again and I do get the same output given the same parameters. I made sure this was not a cache hit by restarting ComfyUI.
But I don't see any link between that repo and the image. The worker repo you linked does not pull the image
runpod/sdxl-turbo
Maybe it's on the args
Let me check on this in a bit
Next I need to learn how to "open" a docker image to see what is inside
And by the way, thank you so much for helping and trying to figure it out
You can set an active worker, start a web terminal and view the source code of the handler.
okay found it
ucodia, you can use this to set seeds in generation
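something along these lines ( a minimal diffusers sketch, not the actual worker code; input field names like "num_inference_steps" are illustrative ):

import torch
from diffusers import AutoPipelineForText2Image

# load an SDXL Turbo pipeline the way a custom handler might
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

def generate(job_input):
    prompt = job_input["prompt"]
    seed = job_input.get("seed")

    # without a seeded generator every call starts from fresh random noise,
    # so the output changes between runs even with identical settings
    generator = None
    if seed is not None:
        generator = torch.Generator("cuda").manual_seed(int(seed))

    return pipe(
        prompt=prompt,
        num_inference_steps=job_input.get("num_inference_steps", 1),
        guidance_scale=0.0,  # SDXL Turbo is normally run without CFG
        generator=generator,
    ).images[0]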
Haha from docker hub you can go to layers to see what's being added
But I assure you, if you're using the serverless worker and it's pulling from the runpod/**** tag, then the github repo linked there is most likely the right one
I actually realized that the model only seems to take the positive prompt into account; changing the size or steps had no effect either!
I did have to use the "dev"-tagged image because there was nothing else; it might have been abandoned? https://hub.docker.com/layers/runpod/sdxl-turbo/dev/images/sha256-9855a54c3f7434f472bfc6604313e8bcdfec0390d1d8ba027e690f683c2ab1e7?context=explore
Hmm i guess so
{
  "input": {
    "prompt": "An image of a cat with a hat on."
  }
}
How are you inputting size, steps?
job_input = job['input']
prompt = job_input['prompt']
i only see the prompt there ( positive )
I input all of it,
I am using the Runpod SDK for JavaScript
I followed this guide: https://docs.runpod.io/tutorials/serverless/gpu/generate-sdxl-turbo#deploy-a-serverless-endpoint
But the runpod/sdxl-turbo:latest did not even exist, I had to use runpod/sdxl-turbo:dev
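in raw form the request I'm sending boils down to something like this ( illustrative, using the plain HTTP API instead of the SDK; endpoint ID and key are placeholders ):

import requests

# RunPod serverless "runsync" call; everything the worker sees is under "input"
resp = requests.post(
    "https://api.runpod.ai/v2/<ENDPOINT_ID>/runsync",
    headers={"Authorization": "Bearer <RUNPOD_API_KEY>"},
    json={
        "input": {
            "prompt": "An image of a cat with a hat on.",
            "seed": 42,
            "num_inference_steps": 4,
            "width": 512,
            "height": 512,
        }
    },
    timeout=120,
)
print(resp.json())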
ah idk, maybe it got deleted then
i guess the best way now is to create your custom handler
If you have the image locally, you can inspect it like this:
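for example ( assuming docker is installed and the image is pulled locally ):

# show the layer history, including the build commands behind each layer
docker history --no-trunc runpod/sdxl-turbo:dev

# dump the full image metadata (entrypoint, cmd, env, labels)
docker image inspect runpod/sdxl-turbo:dev

# or open a shell inside the image and poke around the filesystem
docker run --rm -it --entrypoint /bin/bash runpod/sdxl-turbo:dev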
Thank you for the tips, really appreciate it. I was really hoping I would not have to learn how to code/configure this, since it was officially documented 🥲
It's on docker docs
Hmm yeah, I think RunPod's basic ready-to-use serverless templates, especially the ones using hf workers, don't provide much to customize
You can use a1111 worker for that if you'd like to
Search it up, a1111 worker Ashleyk, github
I finally ended up modifying this one: https://github.com/runpod-workers/worker-sdxl
I'll see if I can change the handler code to make it work 🙃
Sure hahahah
But imo it's easier to just set up the a1111 worker
And you can put the model in the right directory
And then voila just use the API
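the nice thing is that the a1111 txt2img API takes the seed directly, roughly like this ( illustrative, against a plain local A1111 instance; the worker's serverless input format may differ, so check its README ):

import requests

# A1111's txt2img endpoint accepts seed, size and steps as part of the payload
payload = {
    "prompt": "An image of a cat with a hat on.",
    "negative_prompt": "",
    "seed": 42,          # fixed seed -> reproducible output
    "steps": 20,
    "width": 512,
    "height": 512,
    "cfg_scale": 7,
}

resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload, timeout=300)
images_base64 = resp.json()["images"]  # list of base64-encoded PNGs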
Sounds good, I’ll give the auto1111 api a try then