Billed for endpoint stuck in state: Service not ready yet. Retrying...
Hey there, we have a serverless endpoint that seems to be stuck in the state "Service not ready yet. Retrying..."
It has been stuck in this state for about 18 hours now and it appears we're getting billed for this (already $22 for a day) even though no resources are being used. We can't get it out of this state and we don't want to put more money on our account until this is resolved.
Is there anything we can do to stop this? Is this a common problem? Or a rare glitch?
I submitted a support ticket online as well with further details about the specific endpoint ID
Rare glitch, or more likely the code used in your serverless handler isn't working properly.. Let's wait for staff to take a look at the case
You should be able to stop the running workers whenever you want on your endpoint page, and you can also check the logs for each of the running workers
Is it not displaying all of that on the website for you?
Thanks @nerdylive, I was mistaken on a few things. The total billed time was 14 hours.
Looking through the logs, this was happening for 2 hours. But I think it was happening to multiple workers, so perhaps that is how it added up to a 14-hour charge.
Could it perhaps have to do with workers trying to connect to network storage and failing? And then we are getting billed for that?
Ic.
Yes, it could be.. When did it happen?
Not really sure if it's that though, it seems pretty unlikely
Which region is it in also?
Also, has it ever worked before, or did you just deploy it?
Region: Oregon
Time: July 5, 11:51pm MT – July 6, 1:58am MT
We've had the same endpoint deployed for roughly a month or so now.. maybe 3 weeks? Been working great AFAIK. But I have seen these messages before.
So it's working now?
I think, "Service not ready retrying" is from your handler code
Can you check what does it do, Does it check some internal service that might have not boot up yet in your worker
And no other logs?
Additional info if it helps.
You can see total execution time is 164s and cold start time is 95s, so the total is
(164 + 95) × $0.00044 ≈ $0.11, roughly.
I'm using @ashleyk's worker image and I can't find any reference to that log in the code:
https://github.com/ashleykleynhans/runpod-worker-a1111
Doh here it is: https://github.com/ashleykleynhans/runpod-worker-a1111/blob/main/rp_handler.py#L44
Yeah
So wait for service... not sure what this method does tbh
it simply waits for http://127.0.0.1:3000 to be accessible inside your worker
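roughly speaking, the behaviour is a retry loop like this (just an illustrative shell sketch of what that check does, not the actual Python in rp_handler.py):
# keep polling the local A1111 API until it responds, printing the retry message each time
until curl -sf http://127.0.0.1:3000 > /dev/null; do
  echo "Service not ready yet. Retrying..."
  sleep 0.2
done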
can you launch a pod using that network volume
use the RunPod PyTorch template
then run this command in the terminal from Jupyter:
and copy the output here
Ah does it have to be through jupyter? Can I ssh?
I just started the pod without Jupyter, hah
sqlite3.DatabaseError: database disk image is malformed
Hmm
sure thing
Can you check the usage of your network volume on your pods page?
how much % is it?
77% of the volume, 0% of container
sqlite3.DatabaseError: database disk image is malformed
it means your sqlite db is corrupted, idk what the cause could be
Yeah, it might be the sqlite db for A1111, might just delete it and try relaunching
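If you want to check before deleting, something like this from the pod terminal should confirm whether a db is actually corrupted (rough sketch, the paths are just examples, point it at wherever the cache db lives on your volume):
# find sqlite databases on the volume, then run an integrity check on the suspect one
find /workspace -maxdepth 3 -name "*.db"
# needs the sqlite3 CLI; apt-get install sqlite3 if the pod image doesn't have it
sqlite3 /workspace/stable-diffusion-webui/cache.db "PRAGMA integrity_check;"
# if it reports errors, move it aside rather than deleting outright, so it can be rebuilt
mv /workspace/stable-diffusion-webui/cache.db /workspace/stable-diffusion-webui/cache.db.bak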
alright
So I can relaunch A1111 no problem, no db issues. Hmm
After deleting the db file?
No just running from the Pod.
I did delete the cache db anyway. I'm not sure that was the issue though, because it continued to execute inferences. Very strange.
ah
So perhaps the network volume is taking a long time to attach, it got stuck somehow, and the endpoint started billing right away.
But we had set Execution timeout to 600 seconds (default). So 10 minutes max. I would think that would kill the worker. But this went on for 2 hours
Not sure where you are @nerdylive but it is getting late here. I'm really grateful for all your support! I will update the zendesk ticket to fill them in about what we learned here, and maybe that will help us understand what happened to the billing.
I think we'll try to move to direct storage instead of network storage going forward.
Thanks again @nerdylive !
Oh alright
but direct storage on serverless is non-persistent
you're welcome!
@nerdylive I can start another thread about this, but I understand that the main container disk is non-persistent. However, I assumed that with templates we can spec a volume disk, which is persistent? In that case, we could have our Dockerfile (or start.sh?) load in/configure A1111 + models onto the volume disk if they don't exist, and then use that between executions?
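something like this is what I had in mind (very rough start.sh sketch; the paths and model filename are just placeholders, and it assumes the volume is mounted at /workspace):
#!/bin/bash
# one-time setup: copy A1111 + models from the image onto the volume if they aren't there yet
if [ ! -d /workspace/stable-diffusion-webui ]; then
  cp -r /app/stable-diffusion-webui /workspace/
fi
if [ ! -f /workspace/stable-diffusion-webui/models/Stable-diffusion/model.safetensors ]; then
  cp /app/models/model.safetensors /workspace/stable-diffusion-webui/models/Stable-diffusion/
fi
# then launch A1111 from the volume copy and start the handler as usual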
Anyway, sleep for me! hah
Thanks again
sure yes, put your files in your image, and it's there all the time
You should directly access files from your docker image, not move them to the container disk (it'll be more efficient)
@nerdylive btw this is happening again. Nothing is getting picked up from the queue
wew
do you use any extensions?
Try checking the webui logs again, check what fails
It's all the stuff from Ashley's image, including ControlNet, ADetailer, etc.
But honestly it just seems like everything is moving very slow on Runpod right now. I haven't changed anything, same configuration and same extensions as I have for the past month. Just the past few days have been really glitchy.
Will check logs now one sec
webui.log is empty
That "waiting for service, retrying" one?
Ohhhh, geez. I'm sorry. I am out of network volume space. That must be the issue
ahh ic, yeah, what were you doing that used up the space?
Well, it's strange because I had 77% usage of a 65GB network volume. So roughly 15GB free right?
I just downloaded a checkpoint ~7GB.
And suddenly now it's all used up.
Hmm, did any downloads fail?
you can check your usage, try the Linux command for checking folder sizes
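for example (assuming the volume is mounted at /workspace, like it is on a pod):
# show the size of each top-level folder on the volume, biggest last
du -h --max-depth=1 /workspace | sort -h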
Yeah I did, somehow venv is using 14G
so it's your packages, yeah they can use a bunch of space hahahah
I didn't install any new packages, just a new model ~7G. Anyway, yeah something filled that space for sure. I will need to investigate.
I think I'd really like to find a way to stop using network storage, and have a single model per endpoint or something. Do you know what most people do?
Yeah, keep checking folder > subfolder
Yeah, that's feasible, but it doesn't seem like a good fit for your use case
like, if you put the model and all your files in the container, it could slow down your start times if your image is bulky
If it needs to run from the network volume anyway, wouldn't that be the same speed, or even slower?
Ah, it's happening again! It seems only one or two workers get stuck like this. Very strange.
What's in the logs? (webui.log)
Yup just creating a pod again, hah
What do you mean?
I'm not the creator of that code, so it's kinda hard to debug, but all the log says here is that your webui won't start, so it retries
89% still (I deleted an unused model to free up some space) and it hasn't changed since
Super weird, the output of
cat /workspace/logs/webui.log
keeps changing every time I run it
Like drastically different
Oh yeah, I think it rewrites a new file on every worker run (?), not sure
Got a live one.
sqlite3.OperationalError: disk I/O error
seems like it's because of running out of space, from this error?
huh nice
The sqlite dbs are only 8MB
AFAIK, it's just the stable diffusion webui cache files
It's strange because we'll run all 10 workers, and it will chew through the queue, but one will get stuck in this state.
Huh
Maybe because they're conflicting 😂
I'm not sure what's causing this
Hi, did you fix this issue? I have the same problem
If you want to get a closer look at your network volume, run this pod (it gives you a web file explorer view):
https://runpod.io/console/deploy?template=lkpjizsb08&ref=a57rehc6
Mount the network volume you want to work with when you deploy this template; it should be mounted to /workspace. By default the username and password will be "admin" and "admin".
everything worked fine a few hours ago
I didn't touch anything
why am I getting this issue now?
this issue happens with all my storage volumes
it's def an issue with RunPod or the template itself
Jupyter works perfectly fine for this too, I guess..
I think it's more of an A1111 issue when upgrading A1111 from one version to the next.
Oh it causes some kind of sqlite error?
Seems to be the case, but I am not sure.
I think the sqlite DB becomes corrupted when upgrading because the structure changes, but that is just an assumption, someone will need to test it to confirm.
Ye might be
I just switched to fully containerized and dropped network storage altogether, it was too buggy. That took my bill from runaway processes down from $25/day to $5/day.
Are you using https://github.com/ashleykleynhans/runpod-worker-a1111?
This issue is due to corrupt files within the venv. It seems to happen when you use more than 1 template for A1111 on the same network storage. It seems that you can fix it as follows:
Step 1: Activate the venv
Step 2: Reinstall the torch modules and clear the __pycache__ files
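For reference, those steps look roughly like this from a pod terminal (the paths are examples, adjust them to wherever your venv actually lives on the volume, and pin the torch versions your A1111 install expects):
# Step 1: activate the venv on the network volume
source /workspace/venv/bin/activate
# Step 2: force-reinstall the torch packages and clear compiled bytecode caches
pip install --force-reinstall torch torchvision torchaudio
find /workspace/venv -type d -name "__pycache__" -exec rm -rf {} +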
Thanks a lot Marcus