Outpainting
Hey guys. I want to create an outpainting feature in my web app, and I want to use an inpainting SD model + Automatic1111 + a Flask API that will do the outpainting job.
But I want a 100% serverless solution that is not slow. How can I do that please? 🙂
Are you trying to automate this?
Like you just want to outpaint at a set distance all the time?
didn't get that, sorry
I just want to create an outpainting feature in my web app... where my users will be able to extend their images...
But is the user able to control the distance to outpaint?
Or is it: I just send an image
and you outpaint it at your own preset
User can zoom in and out
in the canvas
Yeah, that's a hard one. a1111 documentation on their API is very bad
like that
😦
How experienced are you with programming?
Honestly, there is no easy way to do it:
I remember someone on this server said they used:
https://github.com/huchenlei/sd-webui-api-payload-display
And then (again) I have zero clue how to do it as I've never executed this workflow, but they used this to peek into what the API is doing under the hood when they do image generation
GitHub - huchenlei/sd-webui-api-payload-display: Display the corresponding API payload after each generation on WebUI
then they reverse engineered it
so they could send the same API request too
But I feel that is a lot of work for sure
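If it helps, here's a rough, untested sketch of what that reverse-engineered request could look like - assuming a local A1111 started with --api and the built-in "Outpainting mk2" script; the script_args values are placeholders you'd replace with whatever the payload-display extension shows:
```python
# Rough sketch (untested): send an outpainting request to a running A1111
# instance started with --api. The script_args order/values below are
# placeholders -- copy the real payload from the payload-display extension.
import base64
import requests

A1111_URL = "http://127.0.0.1:7860"  # wherever your webui is listening

with open("input.png", "rb") as f:
    init_image = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "prompt": "a scenic landscape, highly detailed",
    "init_images": [init_image],
    "denoising_strength": 0.8,
    "steps": 30,
    "script_name": "Outpainting mk2",
    # placeholder args (pixels to expand, mask blur, directions, falloff, color variation)
    "script_args": [None, 128, 8, ["left", "right", "up", "down"], 1.0, 0.05],
}

resp = requests.post(f"{A1111_URL}/sdapi/v1/img2img", json=payload, timeout=300)
resp.raise_for_status()

with open("outpainted.png", "wb") as f:
    f.write(base64.b64decode(resp.json()["images"][0]))
```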
But is this related to the infrastructure?
I mean the serverless thing
No, this is not related to infrastructure
I mean the serverless thing is you just launch an A1111 model
and you need to somehow programmatically send it what you want to happen
So the serverless part is not the issue
it's more the
how-to-do-this-programmatically issue
Where can I launch the A1111 model?
Is it as fast as hosting Automatic1111 in a VM?
A VM is essentially what is happening in Docker (in a more lightweight way)
In your code:
something like that
or if you use a webui server, then you can launch the server and send it API requests
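Roughly like this - a sketch assuming A1111 is installed at a known path and you start it headless with the API enabled, then wait for it to come up before sending requests (the path and flags are illustrative):
```python
# Sketch: start the A1111 webui server with its API enabled, then poll
# until it responds before sending img2img requests to it.
# WEBUI_DIR is a hypothetical install path -- adjust to your setup.
import subprocess
import time
import requests

WEBUI_DIR = "/workspace/stable-diffusion-webui"
A1111_URL = "http://127.0.0.1:7860"

proc = subprocess.Popen(["python", "launch.py", "--api"], cwd=WEBUI_DIR)

# Wait (up to ~2 minutes) for the API to become reachable.
for _ in range(120):
    try:
        requests.get(f"{A1111_URL}/sdapi/v1/sd-models", timeout=2)
        break
    except requests.RequestException:
        time.sleep(1)
```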
I hosted Automatic1111 + an inpainting SD model in a VM before and the outpainting results were great... But I could not keep it because of the cost... that's why I need a serverless solution
I see
Why didn't you just spin up and down the VM?
too slow?
I've never heard of that before
As a service, you spun up and down a VM? For other users?
interesting
sorry, I'm not a dev
Okay, so this was just your own usage
I don't know what this is
No, I'm an entrepreneur
There is a dev doing it for me
But he got stuck
So Im searching for solutions...
The GCP VM cost $26 in one day
1 and a half days
$0.60 per hour
I see - okay, the short answer then is: it's possible but requires a lot more work and investigation. Is it worth the time? That's up to the developer / the objective of what you are trying to achieve / how much runway you guys have.
A full-on virtual machine to log in to wouldn't ever have scaled as a SaaS business, which is what I'm assuming your objective was.
I didn't get what you said about the VM
and yes, the web app is a SaaS
No worries - tl;dr: as you found, it is too expensive - and depending on what you mean by VM, it probably wouldn't scale.
Virtual machine on Google Cloud Platform
about the infrastructure, what do you think I should do?
To have a 100% serverless solution that is fast
The problem isn't a serverless solution that is fast - as long as you've got enough streaming requests, you can always scale it and make it faster as necessary with RunPod + FlashBoot + working on optimizing it, etc.
The issue is figuring out how to programmatically do an outpainting, since to my knowledge the A1111 GitHub repository documentation is bad on how to do these things through code.
The programmer you have would need to spend time to reverse engineer it to figure out how to do it through code - not through a UI like the webui / some Gradio app, but through Python code.
We already have a Flask API that does the outpainting with an SD inpainting model + Automatic1111
is this what you mean?
we got great results from it
@rafael21 If you have a Flask app then yes, that makes more sense. Sounds like you are doing it programmatically already then.
Essentially all you need to do is duplicate the setup you have in the payload you are sending to RunPod, and your programmer can look into setting it up
If the question is whether the infrastructure is fast enough on RunPod's end - yes
they have other people using it for production services
The time it takes to investigate RunPod / see if it fits your use case / stress testing / optimizing where you can is a separate issue
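For reference, a call from the web app to a RunPod serverless endpoint looks roughly like this - the endpoint ID, API key, and the fields inside "input" are placeholders that have to match whatever your handler expects:
```python
# Sketch: call a RunPod serverless endpoint synchronously.
# The "input" fields are made up -- they must match your own handler.py.
import base64
import os
import requests

ENDPOINT_ID = os.environ["RUNPOD_ENDPOINT_ID"]
API_KEY = os.environ["RUNPOD_API_KEY"]

with open("input.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

resp = requests.post(
    f"https://api.runpod.ai/v2/{ENDPOINT_ID}/runsync",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"input": {"image": image_b64, "expand_pixels": 128}},
    timeout=600,
)
resp.raise_for_status()
job = resp.json()
if job.get("status") == "COMPLETED":
    outpainted_b64 = job["output"]["image"]  # shape depends on your handler's return value
```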
is there any cold start time problem?
Yes
Which is why you need FlashBoot / but the more workers you have, the more you can avoid this
if you have the budget, you can also keep one minimum active worker
which will reduce your FlashBoot time + you also get a 40% discount from RunPod on the minimum active worker; it just depends on whether you've got enough requests coming through for that
for now, I would have only a few requests from users...
do you have an idea
of cost
for like 20 requests per day
10 seconds each request
nope
something you need to try out
because there is cold start time / execution time / time it takes to load the model / respond, and so on
so many different things could go into it
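Just to show the shape of the math, here's a back-of-envelope sketch - every number below is a made-up placeholder, not a quote; check current RunPod pricing for whichever GPU you pick:
```python
# Back-of-envelope only: all numbers are assumptions, not real RunPod prices.
requests_per_day = 20
execution_seconds = 10        # per request, execution only
cold_start_seconds = 15       # guess; varies with FlashBoot, model size, etc.
price_per_second = 0.0005     # hypothetical $/s for a serverless GPU worker

daily_cost = requests_per_day * (execution_seconds + cold_start_seconds) * price_per_second
print(f"~${daily_cost:.2f}/day")  # ~$0.25/day under these made-up assumptions
```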
Generative Labs (YouTube): Setting Up a Stable Diffusion API with Control Net using RunPod Serverless - a step-by-step guide covering creating a Network Volume for model storage, installing Stable Diffusion and configuring it on the Network Volume, and developing a serverless Stable Diffus...
I found this video
seems to be what we need
Could be. Not sure what the ControlNet API can do. But yes, Generative Labs is good
GL! I mean if you have a Flask app already though, the logic to carry it over can just be added to the handler.py for serverless.
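For what it's worth, the handler side tends to be small - a sketch, assuming your Flask route's outpainting logic is factored into a function you can import (do_outpainting below is a stand-in for your real code):
```python
# Sketch of a RunPod serverless handler.py that reuses existing outpainting logic.
# do_outpainting is a placeholder -- import the function your Flask route calls.
import runpod

def do_outpainting(image_b64: str, expand_pixels: int) -> str:
    """Stand-in for the outpainting code already in the Flask app."""
    raise NotImplementedError

def handler(job):
    job_input = job["input"]
    result_b64 = do_outpainting(
        image_b64=job_input["image"],
        expand_pixels=job_input.get("expand_pixels", 128),
    )
    return {"image": result_b64}

runpod.serverless.start({"handler": handler})
```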
Bro
Is the serverless solution slow? I mean, would my users have to wait several seconds to get the outpainting?
Thanks
several seconds is good? lol.
it depends on how big the image is
how much data
whether you've got to cold start a GPU or it's already on, and so on
again, just a lot of different optimizations
several seconds to run an ML model for image generation I would consider fast
😑
ExtendImageAI - Extend your images with generative AI
We are making a clone of this web app
It takes less than 10 seconds to do the outpainting
How can we get that with a serverless solution bro?
Unfortunately I'm just a community member
You'll have to ask your programmer to look into trying RunPod
and experimenting
You're asking a question with many variables to it
from image size, to the model you're using, to how optimized the model is, and so on
sorry, I didn't know that
I thought you were a support member
Honestly, even RunPod staff will give you the same answer. You're asking something like: how can I build the White House in China, and will it take exactly the same time?
There are too many variables
That's just my two cents
but ppl use runpod for production applications
so yes it's possible
but the step-by-step guide on how to do it
depends on many optimizations
What is ppl?
people
Oh 😅