Illegal Construction
When building a mock serverless endpoint to test locally against test_input.json, I am not receiving the
--- Starting Serverless Worker | Version 1.6.2 ---
log in my container upon run.
Trying to run python handler.py manually in the exec window of my container returns an Illegal Construction message.
Am I doing something stupid that is obviously wrong? Has anyone encountered an illegal construction message in their container when trying to build a serverless endpoint?
What do you see when you start it?
And what container? You can't use serverless to test test_input.json.
Just the usual container start language for the base image:
I'm on macOS, so I get the GPU-not-available message, but when I deploy on RunPod it goes away:
You can't run this container on your local machine if you don't have a GPU.
where I should be getting something like:
2024-03-11 11:40:35 --- Starting Serverless Worker | Version 1.6.2 ---
2024-03-11 11:40:35 INFO | Starting API server.
2024-03-11 11:40:35 DEBUG | Not deployed on RunPod serverless, pings will not be sent.
2024-03-11 11:40:35 INFO: Started server process [1]
2024-03-11 11:40:35 INFO: Waiting for application startup.
2024-03-11 11:40:35 INFO: Application startup complete.
2024-03-11 11:40:35 INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
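(For context: that Uvicorn output is the SDK's local test API server. If I remember the runpod Python SDK correctly, you get it by running the handler file directly with a flag along these lines; treat the flag name as an assumption and verify against the SDK docs:)
```
# runs the handler file directly and starts the local FastAPI test server
# (--rp_serve_api is my recollection of the SDK flag; double-check it)
python handler.py --rp_serve_api
```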
You don't use containers for local testing. You just start the handler.
Very interesting that you'd say that. I was of the same impression, but my testing last week proved otherwise, to my surprise.
Test locally | RunPod Documentation
As you develop your Handler Function, you will, of course, want to test it with inputs formatted similarly to what you will be sending in once deployed as a worker. The quickest way to run a test is to pass in your input as an argument when calling your handler file. Assuming your handler function is inside of a file called your_handler.py and y...
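Concretely, the structure those docs describe is just a plain Python file you can invoke directly. A minimal sketch (the echo logic is a placeholder, not from this thread):
```python
# handler.py: minimal RunPod serverless handler sketch
import runpod

def handler(job):
    job_input = job["input"]  # the "input" object from test_input.json
    # ... real work goes here; this placeholder just echoes the input back
    return {"echo": job_input}

# hands the handler to the SDK; run locally with `python handler.py`,
# which picks up a test_input.json sitting next to the file
runpod.serverless.start({"handler": handler})
```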
Either way, I feel like the lack of the "Starting Serverless Worker" log is the first problem, do you not agree?
Via the last RunPod instance I spun up for local testing, I was able to trigger my local container via Postman and run the process on my Mac M1 machine using the FROM runpod/base:0.4.0-cuda11.8.0 base image.
I was surprised it worked; it probably only used the CPU, though,
whereas on RunPod, GPUs were utilized.
No, because you are doing it wrong; stop trying to start a container.
Read the link I sent for local testing, no container involved.
FROM runpod/base:0.4.0-cuda11.8.0
This is meant to be used on a GPU pod via runpodctl for development, not on your local machine.
RunPod Blog
RunPod's Latest Innovation: Dockerless CLI for Streamlined AI Devel...
Discover the future of AI development with RunPod's Dockerless CLI tool. Experience seamless deployment, enhanced performance, and intuitive design, revolutionizing how you bring AI projects from concept to reality.
Running a container locally is completely incorrect.
Alright, seems as though I was confused and stumbled upon a pretty utilitarian way to build and test last time, then.
Docs are pretty confusing (maybe just to me), to be fair.
I kind of can't test outside of a container unless I were to start a venv, run all of my required installs in that, and then, when I'm ready to deploy, do the same install and config process through my Dockerfile. Is that the documented gold-standard process?
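For what it's worth, the venv route described there would look roughly like this; a sketch, assuming the pip dependencies already live in builder/requirements.txt:
```
# local loop without a container (sketch; paths are assumptions)
python -m venv .venv
source .venv/bin/activate
pip install -r builder/requirements.txt   # mirrors what the Dockerfile installs
python handler.py                         # reads test_input.json from the same dir
```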
I suggest following the blog post above for local development.
Given that local testing is not possible for me, and I will arduously push and deploy on RunPod for each test iteration: I am getting an error in the RunPod worker logs - error pulling daemon from <username>/image
Regardless of my local testing issues, I believe something is wrong with my handler.py, and I cannot figure it out, as it follows the same structure as a working handler.py from a different project. The Dockerfile is nothing different either.
Will try the CLI for a little first.
Your best bet, honestly, is just running the handler.py in a RunPod GPU pod; on a PyTorch template is what I do.
I find the local testing not super helpful unless you have a GPU to run on and you use an expose-all-GPUs flag.
That's what the CLI tool does.
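(If it helps, the dockerless flow from that blog post is roughly the following; I'm recalling the subcommand names from the post, so treat them as assumptions and double-check:)
```
# rough shape of the runpodctl project workflow (subcommands recalled from the blog)
runpodctl project create    # scaffolds the handler, builder/requirements.txt, runpod.toml
runpodctl project dev       # runs a dev session on a GPU pod, syncing your handler code
runpodctl project deploy    # deploys the project as a serverless endpoint
```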
Ooo
That's why I told him to do that
Forgot about that 👁️
Makes sense
I have to say, that blog post could be a bit better; it's like it was rushed.
Yeah, and it should be in the official docs xD. Hoping that backlog item eventually comes through. Tbh, I think the better way would be that any time a feature is released, the engineers who wrote it need to submit the PR to close the feature.
At least that's what my team does to keep the docs up to date.
Yeah, exactly; otherwise docs are a second-class citizen.
I have read through the CLI doc a few times and, pardon my inexperience, I'm still not sure: say I have a pretty complicated Dockerfile, how do the requirements like pip installs, PYTHONPATHs, and apt-get commands run? Do you still use a Dockerfile (or Dockerfile equivalent), but just build the container with RunPod instead of Docker?
I feel like that is the information inside the following section of the blog:
"builder/requirements.txt for listing pip dependencies.
runpod.toml"
but I'll be struggling with this one for days without more info or an example.
Which is fine; perhaps I spend a little while on it and can help others like ashleyk after.
Yeah, add your Python dependencies to
builder/requirements.txt
as you said. I suggest building your own image based on the RunPod base image to install the additional apt packages, and then editing the runpod.toml
file to change the image to your own one instead of the RunPod one.
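(So the edit would be something like the below. I'm going from memory on the runpod.toml schema, so the section and field names are assumptions; check against the file runpodctl generated for you:)
```toml
# sketch only: section/field names assumed, verify against your generated runpod.toml
[project]
base_image = "yourusername/your-custom-base:latest"  # your image built on runpod/base

[runtime]
handler_path = "src/handler.py"
requirements_path = "builder/requirements.txt"
```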
You can add a "Container start command" to your serverless template, but I don't recommend installing apt packages there because it will unnecessarily impact your cold start times; better to build them into the Docker image, in my opinion.
Let me know if my thought process is wrong here, but are you saying that even if I'm using runpodctl dockerless, I should still have a Dockerfile?
That's what it sounded like based on your last sentence.
Based on the doc, I was under the impression that I would not use a Dockerfile at all for the CLI version.
I'm now spending time trying to re-work my old Dockerfile configuration and setup steps using only requirements.txt and runpod.toml.
That is proving kind of hard, as I have a few git clones and apt-get installs in my Dockerfile.
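One way to square that, per the suggestion above: keep a thin Dockerfile whose only job is baking the apt packages and git clones into a custom base image, then point runpod.toml at that image. A sketch, where the package and repo names are placeholders rather than anything from this thread:
```dockerfile
# custom base image: bake apt packages and git clones in at build time
# so they don't run at cold start (names below are placeholders)
FROM runpod/base:0.4.0-cuda11.8.0

RUN apt-get update && \
    apt-get install -y --no-install-recommends ffmpeg libgl1 && \
    rm -rf /var/lib/apt/lists/*

RUN git clone https://github.com/example/some-dependency.git /opt/some-dependency
```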
It depends on the requirements of your application. Dockerless isn't some kind of magic, it just makes the development process easier.
I can see how it would. The value prop is that I don't have to compile and push twice: once for Docker and once for RunPod.
I just need to get the hang of the equivalents/differences in what the two configuration vehicles support. I have never built a toml file as a config vehicle before, so it's new to me. For example, if I need a setup step that looks like the below:
pip install --upgrade "jax[cuda11_pip]" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
where should I put this? It doesn't seem to fit in either requirements.txt or the toml. When I've added it to requirements.txt, I get the following:
[xuoo1uf1rd7hx4] ERROR: Invalid requirement: jax[cuda11_pip] --upgrade -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
[xuoo1uf1rd7hx4] main.py: error: no such option: --upgrade
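For that specific error: --upgrade is an option to the pip install command, not something a requirements file accepts, which is why the parser chokes on that line; --find-links (-f), on the other hand, is valid inside requirements.txt. So splitting it like this should work, assuming the builder just passes the file to pip:
```
# builder/requirements.txt
# --find-links is legal in a requirements file; --upgrade is not
# (and is redundant on a fresh image anyway)
--find-links https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
jax[cuda11_pip]
```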