Daan
RunPod
Created by Daan on 1/2/2024 in #⚡|serverless
Problem with venv
Hi, This is my handler.py file:
#!/usr/bin/env python3
""" Example handler file. """

import runpod
import subprocess

# If your handler runs inference on a model, load the model here.
# You will want models to be loaded into memory before starting serverless.


def handler(job):
    return run_shell_script('./entrypoint.sh')


def run_shell_script(script_path):
    try:
        result = subprocess.run([script_path], check=True, text=True, capture_output=True)
        return result.stdout
    except subprocess.CalledProcessError as e:
        return e.stderr


runpod.serverless.start({"handler": handler})
when entrypoint.sh has this code:
export TRANSFORMERS_CACHE=/workspace

rm -rf /workspace && ln -s /runpod-volume /workspace

source /workspace/.venv/bin/activate

cd workspace/.venv/bin
ls
it works fine and gives
{
"delayTime": 3917,
"executionTime": 1082,
"id": "3d4277ce-31d8-442c-830b-8817be24ecae-e1",
"output": "Activate.ps1\naccelerate\naccelerate-config\naccelerate-estimate-memory\naccelerate-launch\nactivate\nactivate.csh\nactivate.fish\nconvert-caffe2-to-onnx\nconvert-onnx-to-caffe2\ndistro\nf2py\nhttpx\nhuggingface-cli\nisympy\nnormalizer\nopenai\npip\npip3\npip3.11\npython\npython3\npython3.11\nspacy\ntorchrun\ntqdm\ntransformers-cli\nweasel\n",
"status": "COMPLETED"
}
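Side note on the approach itself: the handler launches entrypoint.sh as a child process, so the `export` and `source` lines only affect that child shell and are gone once it exits; they do not change the environment of the handler's own Python process. A minimal illustration (DEMO_VAR is a made-up variable):

```python
import os
import subprocess

# Run a shell that exports a variable, just like the handler runs entrypoint.sh.
subprocess.run(["sh", "-c", "export DEMO_VAR=hello"], check=True)

# The export only existed inside the child shell; the parent never sees it.
print("DEMO_VAR" in os.environ)
```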
28 replies
RunPod
Created by Daan on 1/1/2024 in #⚡|serverless
Issue with Dependencies Not Being Found in Serverless Endpoint
I am encountering an issue with a network volume I created. First, I created a network volume and used it to set up a pod. During this setup, I modified the network volume: in the directory where it was mounted, I created and activated a virtual environment (venv), then installed various dependencies into it. Next, I created a serverless endpoint that uses this network volume. As far as I understand, the volume is mounted at the directory runpod-volume. I activate the venv located there and then start a program that is also stored on the volume. However, I soon run into a problem: the dependencies I installed are not being found. Could you please help me identify where I might be going wrong? It seems the dependencies installed in the venv are not being recognized or accessed by the serverless endpoint. Thanks
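One likely culprit (a guess, not confirmed by the thread): a venv's `activate` script hardcodes the absolute path where the venv was created, so a venv built under /workspace on a pod can misbehave when the same volume appears at /runpod-volume in serverless. Calling the venv's own interpreter by its absolute path sidesteps `activate` entirely. A sketch, where `/runpod-volume/.venv/bin/python` and `run_in_venv` are assumed names:

```python
import subprocess

# Hypothetical path: the venv lives at the root of the network volume, which
# serverless mounts at /runpod-volume (it was /workspace on the pod).
VENV_PYTHON = "/runpod-volume/.venv/bin/python"


def run_in_venv(code: str, python: str = VENV_PYTHON) -> str:
    """Execute a snippet with the venv's own interpreter and return its stdout."""
    result = subprocess.run(
        [python, "-c", code], text=True, capture_output=True, check=True
    )
    return result.stdout
```

The interpreter inside the venv resolves its site-packages relative to its own location, so this works regardless of what `activate` believes the path is.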
24 replies
RunPod
Created by Daan on 1/1/2024 in #⛅|pods
install in network volume
Hello, I want to install some dependencies, so I run "pip install -r requirements.txt" in the directory where my network volume is mounted. However, I notice that these are not installed on the network volume, but on the container disk (so not permanently). How can I ensure that they are installed on the network volume and therefore preserved?
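One common approach (a sketch, not official RunPod guidance): create the virtual environment on the volume itself, so everything pip installs is written to it. VOLUME is a placeholder here so the commands run anywhere; on a pod you would set it to the actual mount point, e.g. /workspace.

```shell
# Sketch: put the venv on the network volume so installed packages persist.
# VOLUME is a placeholder; on a RunPod pod this would be the volume's mount
# point (commonly /workspace).
VOLUME="${VOLUME:-$(mktemp -d)}"
python3 -m venv "$VOLUME/.venv"     # the venv's files are written to the volume
. "$VOLUME/.venv/bin/activate"      # pip/python now resolve inside that venv
python -m pip --version             # sanity check: pip comes from the venv
```

After this, `pip install -r requirements.txt` writes into `$VOLUME/.venv` rather than the container disk, so the packages survive pod restarts.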
11 replies
RunPod
Created by Daan on 12/30/2023 in #⚡|serverless
Possible error in docs: Status of a job with python code
In the docs, there is this command to retrieve the status of a submitted job:
curl https://api.runpod.ai/v2/<your-api-id>/status/<your-status-id>
And in the docs, this should be the equivalent Python code:
# this requires the installation of runpod-python
# with `pip install runpod-python` beforehand

import runpod

runpod.api_key = "xxxxxxxxxxxxxxxxxxxxxx" # you can find this in settings

endpoint = runpod.Endpoint("ENDPOINT_ID")

run_request = endpoint.run(
    {"prompt": "a cute magical flying dog, fantasy art drawn by disney concept artists"}
)

print(run_request.status())
But I do not want to start a job again, I just want to retrieve the status of an existing job (by its job_id, endpoint_id, and api_key). I guess this is a small error in the docs, or is there really no way to retrieve the status of a job by these parameters, as is possible with curl?
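For reference, the curl call can be reproduced directly with the standard library, without starting a new job. This is a sketch assuming the same `/v2/<endpoint_id>/status/<job_id>` route from the curl example and Bearer-token authentication; `get_job_status` is a made-up helper name:

```python
import json
import urllib.request

def get_job_status(endpoint_id: str, job_id: str, api_key: str) -> dict:
    """Mirror: curl https://api.runpod.ai/v2/<endpoint_id>/status/<job_id>"""
    url = f"https://api.runpod.ai/v2/{endpoint_id}/status/{job_id}"
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {api_key}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

This only issues a GET against the status route, so unlike `endpoint.run(...)` it never enqueues a new job.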
13 replies