Illegal Construction

When building a mock serverless endpoint to test locally against test_input.json, I am not receiving the --- Starting Serverless Worker | Version 1.6.2 --- log in my container on run. Trying to run python handler.py manually in my container's exec window returns an "Illegal Construction" message.
Am I doing something stupid that is obviously wrong, or has anyone else encountered an illegal construction message in their containers when trying to build a serverless endpoint?
ashleyk · 6mo ago
What do you see when you start it? And what container? You can't use serverless to test test_input.json.
zfmoodydub · 6mo ago
Just the usual container start output for the base image. I'm on macOS, so I get the GPU-not-available message, but when I deploy on RunPod it goes away.
ashleyk · 6mo ago
You can't run this container on your local machine if you don't have a GPU.
zfmoodydub · 6mo ago
Whereas I should be getting something like:
2024-03-11 11:40:35 --- Starting Serverless Worker | Version 1.6.2 ---
2024-03-11 11:40:35 INFO | Starting API server.
2024-03-11 11:40:35 DEBUG | Not deployed on RunPod serverless, pings will not be sent.
2024-03-11 11:40:35 INFO: Started server process [1]
2024-03-11 11:40:35 INFO: Waiting for application startup.
2024-03-11 11:40:35 INFO: Application startup complete.
2024-03-11 11:40:35 INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
ashleyk · 6mo ago
You don't use containers for local testing. You just start the handler.
zfmoodydub · 6mo ago
Very interesting that you'd say that. I was under the same impression, but my testing last week proved otherwise, to my surprise.
ashleyk · 6mo ago
Test locally | RunPod Documentation
As you develop your Handler Function, you will, of course, want to test it with inputs formatted similarly to what you will be sending in once deployed as a worker. The quickest way to run a test is to pass in your input as an argument when calling your handler file. Assuming your handler function is inside of a file called your_handler.py and y...
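The flow those docs describe can be sketched in pure Python, with no container and no GPU involved. This is a stand-in rather than the runpod SDK itself: the echo handler and the test_input.json contents below are hypothetical, and in the real SDK `runpod.serverless.start({"handler": handler})` does this wiring for you when you run the handler file directly.

```python
import json
import os
import tempfile

def handler(job):
    # hypothetical worker logic: echo the prompt back
    return {"echo": job["input"]["prompt"]}

# mimic what running `python your_handler.py` does locally:
# read a test_input.json sitting next to the handler and feed it in
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "test_input.json")
    with open(path, "w") as f:
        json.dump({"input": {"prompt": "hello"}}, f)
    with open(path) as f:
        job = json.load(f)
    print(handler(job))  # {'echo': 'hello'}
```

The point is that the handler is just a function taking a job dict; nothing about it requires Docker to exercise it locally.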
zfmoodydub · 6mo ago
Either way, I feel like the lack of the "Starting Serverless Worker" log is the first problem, don't you agree? With the last RunPod instance I spun up for local testing, I was able to trigger my local container via Postman and run the process on my Mac M1 machine using the FROM runpod/base:0.4.0-cuda11.8.0 base image. I was surprised it worked; it probably only used CPU, whereas on RunPod, GPUs were utilized.
ashleyk · 6mo ago
No, because you are doing it wrong; stop trying to start a container. Read the link I sent for local testing, no container involved. FROM runpod/base:0.4.0-cuda11.8.0 is meant to be used on a GPU pod via runpodctl for development, not on your local machine.
ashleyk · 6mo ago
RunPod Blog
RunPod's Latest Innovation: Dockerless CLI for Streamlined AI Devel...
Discover the future of AI development with RunPod's Dockerless CLI tool. Experience seamless deployment, enhanced performance, and intuitive design, revolutionizing how you bring AI projects from concept to reality.
ashleyk · 6mo ago
Running a container locally is completely incorrect.
zfmoodydub · 6mo ago
Alright, it seems I was confused and stumbled upon a pretty utilitarian way to build and test last time, then. The docs are pretty confusing (maybe just to me), to be fair.
I kind of can't test outside of a container unless I start a venv, run all of my required installs in that, and then, when I'm ready to deploy, repeat the same install and config process through my Dockerfile. Is that the documented gold-standard process?
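For what it's worth, the venv route described above is only a few commands. A minimal sketch, assuming a requirements.txt that mirrors the installs in the Dockerfile (that file is hypothetical here, so its install line is left commented out):

```shell
# sketch: isolated local-testing environment (paths hypothetical)
python3 -m venv .venv
. .venv/bin/activate
# pip install -r requirements.txt   # mirror the installs from your Dockerfile
python -c "import sys; print(sys.prefix)"   # prints a path inside .venv when active
```

Keeping the venv's requirements and the Dockerfile's pip installs pointed at the same requirements.txt is one way to avoid the two environments drifting apart.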
ashleyk · 6mo ago
I suggest following the blog post above for local development.
zfmoodydub · 6mo ago
Given that local testing is not possible for me, I will arduously push and deploy on RunPod for each test iteration. I am getting an error in the RunPod worker logs: error pulling daemon from <username>/image. Regardless of my local testing issues, I believe something is wrong with my handler.py, and I cannot figure it out, as it follows the same structure as a working handler.py from a different project. The Dockerfile is nothing different either. I will try the CLI for a little first.
justin · 6mo ago
Your best bet is honestly just running the handler.py in a RunPod GPU pod on a PyTorch template; that's what I do. I find local testing not super helpful unless you have a GPU to run on and you pass an expose-all-GPUs flag.