Custom Handler Error Logging
I'm trying to add error logging to my custom rp_handler.py using the runpod library.
As far as I understand, these messages should show up in the logs of my RunPod worker (Logs -> Container Logs). That is true for some of the logger.info messages: they work in the main function. But when I use the logger inside my handler function, nothing is shown. Print statements aren't picked up either. How do I log correctly in a custom handler?
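A minimal sketch of what such a handler's logging could look like, using only the standard library so it works regardless of the log collector (the handler name and job shape are illustrative, not taken from the thread):

```python
import logging
import sys

# Route all handler logs to stdout so a container log collector can pick
# them up; stderr/stdout is what "Container Logs" style views usually show.
logging.basicConfig(
    stream=sys.stdout,
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)
log = logging.getLogger("rp_handler")

def handler(job):
    # Log at the start and end of the handler so you can tell whether it
    # was entered at all, independent of what happens inside.
    log.info("handler started for job %s", job.get("id"))
    result = {"echo": job.get("input")}
    log.info("handler finished")
    return result

print(handler({"id": "test-1", "input": {"x": 1}}))
```

In an actual worker the handler would be registered with the runpod library's serverless start call instead of being invoked directly; the direct call here just demonstrates that the log lines and return value are produced.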
The RunPodLogger is fine, but your logs are dropped if you log too verbosely.
And you get the logs here, not from container logs.
OK, there are no logs there either. What counts as too verbose? I just do something like:
I just get the "Started." message.
I get the correct logging with this:
But in my handler function, I get nothing. I also tried using the logging library to write to a file, but it isn't writing to the file either.
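One common reason file logging silently does nothing is writing to a path that doesn't exist or isn't writable in the container. A hedged sketch that writes to a path known to be writable and then reads it back to verify (the logger name and file name are illustrative):

```python
import logging
import os
import tempfile

# Write to the temp directory, which is writable in virtually any container;
# a hard-coded path like /workspace may simply not exist in serverless.
log_path = os.path.join(tempfile.gettempdir(), "handler.log")

file_logger = logging.getLogger("file_demo")
fh = logging.FileHandler(log_path)
fh.setFormatter(logging.Formatter("%(levelname)s %(message)s"))
file_logger.addHandler(fh)
file_logger.setLevel(logging.INFO)

file_logger.info("written to %s", log_path)
fh.flush()  # make sure the record actually hits disk before reading back

with open(log_path) as f:
    print(f.read().strip())
```

If the read-back succeeds locally but the file never appears on the worker, the problem is the target path on the worker, not the logging configuration.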
Update: I restarted, and now the shorter messages appear in the logs. But it's still not writing to the file, and the short messages aren't suitable for debugging.
Update: when I use the Requests tab in RunPod and put my payload in there, I get correct logging from my worker. If I use a script and call the API from there, no logging happens.
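For comparison, a sketch of what calling the endpoint from a script looks like. The endpoint ID and API key below are placeholders; the block only builds the request so it can be inspected, with the actual network call left commented out:

```python
import json

# Placeholders -- substitute your own values.
ENDPOINT_ID = "your-endpoint-id"
API_KEY = "your-api-key"

# /runsync is RunPod's synchronous serverless route; the payload must be
# wrapped in an "input" key, which is what the handler receives as job["input"].
url = f"https://api.runpod.ai/v2/{ENDPOINT_ID}/runsync"
headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
payload = {"input": {"prompt": "hello"}}

# Actual call (requires the `requests` package and a live endpoint):
# resp = requests.post(url, headers=headers, json=payload, timeout=120)
# print(resp.json())

print(url)
print(json.dumps(payload))
```

If a script gets the same response as the Requests tab but different logging, it is worth checking which worker instance served each request, since (as noted below in the thread) stale workers can still receive traffic.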
There must be something wrong with your handler. Works fine for me.
I can use my worker and it logs correctly from the Requests tab in RunPod, but when I post the exact same payload from a script, I get no logging. In the end I get the same response, so the handler itself shouldn't be the problem, right?
My logging is working fine, so you must be doing something wrong.
Also, this probably won't work, since the network volume is mounted as /runpod-volume and not /workspace in serverless.
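A small sketch of how code can cope with that difference instead of hard-coding one path (the fallback to /tmp is an assumption for local testing, not part of the thread):

```python
import os

def volume_root():
    """Return the first existing mount point for persistent storage.

    Serverless workers mount the network volume at /runpod-volume,
    while pods typically use /workspace; /tmp is a local-testing fallback.
    """
    for path in ("/runpod-volume", "/workspace"):
        if os.path.isdir(path):
            return path
    return "/tmp"

print(volume_root())
```

Writing the log file under `volume_root()` avoids silently targeting a directory that doesn't exist in the current environment.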
Also, make sure you scale max workers down to zero and back up again every time you make a change; otherwise it probably won't pick up your changes unless you increment the tag of your Docker image.
You also need to make sure all stale workers are gone before sending new requests, because unfortunately requests still get sent to stale workers when a new image is deployed.

Thank you very much for your advice, I will try!