Serverless Active State behaviour
Some APIs I was using on serverless used to work in both the active and idle states, but now switching the worker to active seems to break it: the response is always the same as the previous one, or just "finished".
I want to debug what is happening. Can someone explain how state works internally in the handler after the worker wakes up?
What will stay in memory?
Will it run entrypoint.sh only once, correct?
Will it send the start signal only once, or once for every task?
runpod.serverless.start({
    "handler": handler
})
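For reference, here's a minimal sketch of the lifecycle I'm asking about, assuming the standard runpod Python SDK (the print line is just mine for illustration):

import runpod

print("worker process booted")  # runs once per process, i.e. per cold start

def handler(job):
    # runs once per job this worker picks up
    return {"echo": job["input"]}

runpod.serverless.start({"handler": handler})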
Can you make sure you're using the latest SDK?
If you have FlashBoot enabled, it could cause the results you are getting. FlashBoot caches things in the worker, and entrypoint.sh (or whatever Docker start command you're using) won't necessarily run only once; it can run every time your endpoint receives a request if you aren't sending a constant flow of requests to your endpoint. Maybe you can also provide more details, because it's really difficult to figure out what the issue is from your explanation.
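One pattern worth checking, as a sketch (cached_result and expensive_work are hypothetical names, not anything from your code): if the handler stashes a result in a module-level variable, a FlashBoot-warmed worker keeps that state between requests, which looks exactly like "the response is always the same as the one before":

import runpod

cached_result = None  # module-level state survives across jobs in a warm worker

def expensive_work(inp):
    # stand-in for the real per-request computation
    return {"output": inp}

def handler(job):
    global cached_result
    # BUG: only the first job ever computes; a FlashBoot-warmed worker then
    # returns the same stale response for every later job
    if cached_result is None:
        cached_result = expensive_work(job["input"])
    return cached_result

runpod.serverless.start({"handler": handler})

Per-job results belong inside the handler; keep only reusable things (models, clients) in module-level globals.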
I've updated the Docker image with the latest runpod package. Is there any other SDK I need to check?
Yes, I noticed it with FlashBoot, and I've now removed it.
@ashleyk is there a way to know the worker's status internally in handler.py, for example whether it's a cold start or an active worker?
No, I don't think so. Maybe @flash-singh can advise.
Nope, the worker itself is not aware.
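You can approximate it yourself though. As a sketch (the flag name is mine, and "cold start" here just means "first job handled by this process"):

import runpod

_first_job = True  # flips after this process handles its first job

def handler(job):
    global _first_job
    cold = _first_job
    _first_job = False
    # cold is True for the first job this process serves; with FlashBoot the
    # process may be reused, so later jobs in the same process see False
    return {"cold_start": cold, "echo": job["input"]}

runpod.serverless.start({"handler": handler})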
OK, so in theory it will already have entrypoint.sh loaded, and will it emit the "finished" signal once, or every time there is a task?
And handler.py is still in memory, correct?
I'm asking how I can keep models in memory to speed things up (or at which part of the loading process to do it).
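The common pattern, as a sketch (load_model and MODEL_PATH are placeholders for whatever you actually use): load the model at module scope, before runpod.serverless.start(), so it loads once per worker process and every job served by a warm worker reuses it:

import runpod

MODEL_PATH = "/models/my-model"  # placeholder path

def load_model(path):
    # stand-in for your real loading code (torch.load, from_pretrained, ...)
    return {"weights_from": path}

model = load_model(MODEL_PATH)  # runs once per worker process, then stays in memory

def handler(job):
    # every job served by this (warm) worker reuses the already-loaded model
    return {"model": model["weights_from"], "input": job["input"]}

runpod.serverless.start({"handler": handler})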
We can close this. I figured out the solution: removing FlashBoot fixed the duplicated responses.