Illegal Construction

When building a mock serverless endpoint to test locally against test_input.json, I am not receiving the --- Starting Serverless Worker | Version 1.6.2 --- log in my container on run. Trying to run python handler.py manually in the exec window of my container returns an Illegal Construction message.
Am I doing something obviously wrong, or has anyone encountered an illegal construction message in their containers when trying to build a serverless endpoint?
27 Replies
ashleyk · 9mo ago
What do you see when you start it? And what container? You can't use serverless to test test_input.json.
zfmoodydub (OP) · 9mo ago
Just the usual container start output for the base image. I'm on macOS, so I get the GPU-not-available message, but when I deploy on RunPod it goes away.
ashleyk · 9mo ago
You can't run this container on your local machine if you don't have a GPU.
zfmoodydub (OP) · 9mo ago
Whereas I should be getting something like:
2024-03-11 11:40:35 --- Starting Serverless Worker | Version 1.6.2 ---
2024-03-11 11:40:35 INFO | Starting API server.
2024-03-11 11:40:35 DEBUG | Not deployed on RunPod serverless, pings will not be sent.
2024-03-11 11:40:35 INFO: Started server process [1]
2024-03-11 11:40:35 INFO: Waiting for application startup.
2024-03-11 11:40:35 INFO: Application startup complete.
2024-03-11 11:40:35 INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
ashleyk · 9mo ago
You don't use containers for local testing. You just start the handler.
zfmoodydub (OP) · 9mo ago
Very interesting that you'd say that. I was of the same impression, but my testing last week proved otherwise, to my surprise.
ashleyk · 9mo ago
Test locally | RunPod Documentation
As you develop your Handler Function, you will, of course, want to test it with inputs formatted similarly to what you will be sending in once deployed as a worker. The quickest way to run a test is to pass in your input as an argument when calling your handler file. Assuming your handler function is inside of a file called your_handler.py and y...
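The pattern that doc describes reduces to an ordinary function you can call directly, no container involved. A minimal sketch, where the handler body and payload shape are illustrative rather than taken from this thread:

```python
# Minimal RunPod-style handler sketch; the echo logic is illustrative.
def handler(job):
    # RunPod delivers the request payload under job["input"]
    prompt = job["input"].get("prompt", "")
    return {"output": prompt.upper()}

# In a real handler.py you would register it with the SDK, e.g.:
#   import runpod
#   runpod.serverless.start({"handler": handler})

if __name__ == "__main__":
    # Roughly what local testing does with the contents of test_input.json:
    print(handler({"input": {"prompt": "hello"}}))  # prints {'output': 'HELLO'}
```

Because the handler is just a function, you can unit-test it with any payload before ever building an image.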
zfmoodydub (OP) · 9mo ago
Either way, I feel like the lack of the "Starting Serverless Worker" log is the first problem, do you not agree? Via the last RunPod instance I spun up for local testing, I was able to trigger my local container via Postman and run the process on my Mac M1 machine using the FROM runpod/base:0.4.0-cuda11.8.0 base image. I was surprised it worked; it probably only used CPU, whereas on RunPod, GPUs were utilized.
ashleyk · 9mo ago
No, because you are doing it wrong; stop trying to start a container. Read the link I sent for local testing: no container involved. FROM runpod/base:0.4.0-cuda11.8.0 is meant to be used on a GPU pod via runpodctl for development, not on your local machine.
ashleyk · 9mo ago
RunPod Blog
RunPod's Latest Innovation: Dockerless CLI for Streamlined AI Devel...
Discover the future of AI development with RunPod's Dockerless CLI tool. Experience seamless deployment, enhanced performance, and intuitive design, revolutionizing how you bring AI projects from concept to reality.
ashleyk · 9mo ago
Running a container locally is completely incorrect.
zfmoodydub (OP) · 9mo ago
Alright, it seems as though I was confused and stumbled upon a pretty utilitarian way to build and test last time, then. The docs are pretty confusing (maybe just to me), to be fair.
I kind of can't test outside of a container unless I start a venv, run all of my required installs in that, and then, when I'm ready to deploy, do the same install and config process through my Dockerfile. Is that the documented gold-standard process?
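For reference, that venv loop is small. A sketch, with file names assumed from the thread (handler.py, requirements.txt, test_input.json), the real install/run steps left as comments:

```shell
# Docker-free local test loop (sketch; project files are assumed to exist)
python3 -m venv .venv
. .venv/bin/activate
# in a real project you would then run:
#   pip install -r requirements.txt
#   python handler.py    # picks up test_input.json from the working directory
python -c 'print("venv ready")'
```

The Dockerfile then repeats the same installs for the deployed image; the venv only exists to make the edit-run cycle fast locally.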
ashleyk · 9mo ago
I suggest following the blog post above for local development.
zfmoodydub (OP) · 9mo ago
Given that local testing is not possible for me, and I will arduously push and deploy on RunPod for each test iteration, I am getting an error in the RunPod worker logs: error pulling daemon from <username>/image. Regardless of my local testing issues, I believe something is wrong with my handler.py, and I cannot figure it out, as it follows the same structure as a working handler.py from a different project. The Dockerfile is nothing different either. Will try the CLI for a little first.
justin · 9mo ago
Your best bet is honestly just running the handler.py in a RunPod GPU pod on a PyTorch template; that's what I do. I find the local testing not super helpful unless you have a GPU to run on and you use an expose-all-GPUs flag.
ashleyk · 9mo ago
That's what the CLI tool does.
justin · 9mo ago
Ooo
ashleyk · 9mo ago
That's why I told him to do that.
justin · 9mo ago
Forgot about that 👁️ makes sense
ashleyk · 9mo ago
I have to say, that blog post could be a bit better; it's like it was rushed.
justin · 9mo ago
Yeah, and it should be in the official docs xD. Hoping that backlog item eventually comes through. Tbh, I think the better way would be that any time a feature is released, the engineers who wrote it have to submit the docs PR to close the feature. At least that's what my team does to keep the docs up to date.
ashleyk · 9mo ago
Yeah, exactly; otherwise docs are a second-class citizen.
zfmoodydub (OP) · 9mo ago
I have read through the CLI doc a few times, and pardon my inexperience, but I'm still not sure: if, say, I have a pretty complicated Dockerfile, how do requirements like pip installs, PYTHONPATHs, and apt-get commands run? Do you still use a Dockerfile (or a Dockerfile equivalent) but just build the container with runpodctl instead of docker? I feel like that information is inside the following section of the blog: "builder/requirements.txt for listing pip dependencies. runpod.toml", but I'll be struggling with this one for days without more info or an example. Which is fine; perhaps I spend a little while on it and can then help others, like ashleyk does.
ashleyk · 9mo ago
Yeah, add your Python dependencies to builder/requirements.txt as you said. I suggest building your own image based on the RunPod base image to install the additional apt packages, then editing the runpod.toml file to change the image to your own instead of the RunPod one. You can add a "Container Start Command" to your serverless template, but I don't recommend installing apt packages there because it will unnecessarily impact your cold-start times; better to build them into the Docker image, in my opinion.
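For illustration only, the image swap described here would live in the generated runpod.toml. The key names below are assumptions, not taken from this thread, so treat whatever `runpodctl project create` writes as the ground truth:

```toml
# Illustrative fragment; keys are assumptions, keep whatever the CLI generated
# and change only the image reference.
[project]
base_image = "yourname/runpod-base-plus-apt:latest"  # your image built FROM runpod/base
```

The idea is that apt packages get baked into that custom base image once, and the dockerless flow layers your handler and pip dependencies on top of it.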
zfmoodydub (OP) · 9mo ago
Let me know if my thought process is wrong here, but are you saying that even if I'm using runpodctl dockerless, I should still have a Dockerfile? That's what it sounded like based on your last sentence. Based on the doc, I was under the impression that I would not use a Dockerfile at all for the CLI version. I'm now spending time trying to re-work my old Dockerfile configuration and setup steps using only requirements.txt and runpod.toml, and that is proving kind of hard, as I have a few git clones and apt-get installs in my Dockerfile.
ashleyk · 9mo ago
It depends on the requirements of your application. Dockerless isn't some kind of magic, it just makes the development process easier.
zfmoodydub (OP) · 9mo ago
I can see how it would; the value prop is that I don't have to compile and push twice, once for Docker and once for RunPod.
I just need to get the hang of the equivalents/differences in what the two configuration vehicles support. I have never built a toml file as a config vehicle before, so it's new to me. For example, if I need a setup step that looks like the below:
pip install --upgrade "jax[cuda11_pip]" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
where should I put it? It doesn't seem to fit in either requirements.txt or the toml. When I've added it to requirements.txt, I get the following:
[xuoo1uf1rd7hx4] ERROR: Invalid requirement: jax[cuda11_pip] --upgrade -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
[xuoo1uf1rd7hx4] main.py: error: no such option: --upgrade
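On that last error: pip requirements files reject per-requirement install options like --upgrade, but option lines such as -f (--find-links) are valid on their own line, so a requirements.txt sketch that should parse is:

```text
# find-links line so pip can locate the CUDA jax wheels
-f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
jax[cuda11_pip]
```

--upgrade belongs to pip install itself, and in a fresh image build there is nothing installed yet to upgrade, so dropping it loses nothing.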