S3 ENV variables do not work as described in the RunPod documentation

Hi all, I have a serverless function and have set all the env variables as described in the documentation, but they come up undefined in the RunPod logs. I have added all of the following ENV examples to my template, but none of them work:
logger.info("env1", os.environ["BUCKET_ENDPOINT_URL"])
logger.info("env2", os.environ.get("BUCKET_ENDPOINT_URL"))
logger.info("env3", os.environ.get("RUNPOD_BUCKET_ENDPOINT_URL"))
logger.info("env4", os.environ["RUNPOD_BUCKET_ENDPOINT_URL"])
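One thing worth checking in snippets like the above: with Python's stdlib logging module, extra positional arguments to logger.info are treated as %-style format arguments, so the value only shows up if the message string contains a placeholder. The sketch below assumes a stdlib logger (the poster's logger may be configured differently) and uses a hypothetical value for BUCKET_ENDPOINT_URL:

```python
import io
import logging
import os

# Hypothetical value, standing in for the template-provided variable.
os.environ["BUCKET_ENDPOINT_URL"] = "https://example-bucket.s3.eu-central-1.amazonaws.com"

# Capture log output in a buffer so the rendered message is visible.
buf = io.StringIO()
handler = logging.StreamHandler(buf)
handler.setFormatter(logging.Formatter("%(levelname)s | %(message)s"))
logger = logging.getLogger("env-demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.propagate = False

# The extra positional arg is %-format data, so the message string needs
# a %s placeholder for the value to appear in the rendered log line.
logger.info("env1 %s", os.environ.get("BUCKET_ENDPOINT_URL", "<unset>"))

rendered = buf.getvalue()
```

Using .get(..., "&lt;unset&gt;") also makes a missing variable visible in the logs instead of raising a KeyError, which helps distinguish "unset" from "empty string".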
This snippet is from my rp_handler, and all of them log as empty strings. This is how I call the handler:
runpod.serverless.start({"handler": handler})
and this is my handler:
def handler(event):
    validated_input = validate(event["input"], INPUT_SCHEMA)

    logger.info("Handler validating input")
    if "errors" in validated_input:
        return {"error": validated_input["errors"]}

    return face_swap_api(event, validated_input["validated_input"])
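Since the thread later turns on local-vs-deployed differences, the handler above can be smoke-tested locally without RunPod at all. In this sketch, validate and INPUT_SCHEMA are simplified stand-ins for the RunPod validator used in the original (the real one also checks types and defaults), and face_swap_api is a stub:

```python
# Hypothetical minimal schema, just enough to exercise both handler paths.
INPUT_SCHEMA = {"source_image": {"required": True}}

# Simplified stand-in for runpod.serverless.utils.rp_validator.validate.
def validate(raw_input, schema):
    missing = [k for k, rule in schema.items()
               if rule.get("required") and k not in raw_input]
    if missing:
        return {"errors": [f"missing required field: {k}" for k in missing]}
    return {"validated_input": raw_input}

def face_swap_api(event, validated_input):
    # Stub for the real worker function.
    return {"status": "ok", "input": validated_input}

def handler(event):
    validated_input = validate(event["input"], INPUT_SCHEMA)
    if "errors" in validated_input:
        return {"error": validated_input["errors"]}
    return face_swap_api(event, validated_input["validated_input"])

# Exercise the error path and the happy path with fake events.
bad = handler({"id": "local-test", "input": {}})
good = handler({"id": "local-test", "input": {"source_image": "https://example.com/a.png"}})
```

If the error path behaves correctly here but not in the deployed worker, the difference is in the environment or the deployed image, not the handler logic.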
And this is my start.sh script:
#!/usr/bin/env bash

# Export env vars
export_env_vars() {
    echo "Exporting environment variables..."
    printenv | grep -E '^RUNPOD_|^PATH=|^_=' | awk -F = '{ print "export " $1 "=\"" $2 "\"" }' >> /etc/rp_environment
    echo 'source /etc/rp_environment' >> ~/.bashrc
}

echo "Worker Initiated"

echo "Symlinking files from Network Volume"
ln -s /runpod-volume /workspace
rm -rf /root/.cache
rm -rf /root/.ifnude
rm -rf /root/.insightface
ln -s /runpod-volume/.cache /root/.cache
ln -s /runpod-volume/.ifnude /root/.ifnude
ln -s /runpod-volume/.insightface /root/.insightface

echo "Starting RunPod Handler"
export PYTHONUNBUFFERED=1
cd /workspace/runpod

export_env_vars

python3 -u rp_handler.py
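A side note on the export_env_vars function above: `awk -F =` with `$1` and `$2` keeps only the text between the first and second '=', so any value that itself contains '=' (URLs with query parameters, for example) gets truncated when written to /etc/rp_environment. The Python sketch below illustrates the difference between that field-splitting and a single split on the first '=' (the line content is a hypothetical example):

```python
# A printenv line whose value itself contains '=' (hypothetical example).
line = "BUCKET_ENDPOINT_URL=https://bucket.s3.amazonaws.com/?X-Amz-Credential=abc"

# Equivalent of `awk -F = '{ print $2 }'`: only the second '='-separated
# field survives, so the value is cut off at the next '='.
awk_style_value = line.split("=")[1]

# Splitting exactly once, on the first '=', keeps the whole value intact
# (the shell analogue is "${line%%=*}" for the key and "${line#*=}" for it).
key, full_value = line.split("=", 1)
```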
It would be great if someone could help me. Thanks!
21 Replies
Madiator2011 · 9mo ago
BUCKET_ACCESS_KEY_ID, BUCKET_ENDPOINT_URL, BUCKET_SECRET_ACCESS_KEY. Note that you need to set them in the template, not in the handler.
Madiator2011 · 9mo ago
[image attachment]
Alikarami (OP) · 9mo ago
Yes, I added them in the template too. What do you mean, not in the handler?
Madiator2011 · 9mo ago
I mean you do not define them in the handler file. If you use rp_upload, just add them in the template before you deploy.
Alikarami (OP) · 9mo ago
I have not defined them in the handler, only in the template before deploy. I only wrote this logger call for testing, to see if it works, but it is an empty string:
logger.info("env1", os.environ["BUCKET_ENDPOINT_URL"])
Madiator2011 · 9mo ago
is it js?
Alikarami (OP) · 9mo ago
no python
Madiator2011 · 9mo ago
what error do you get?
Alikarami (OP) · 9mo ago
[image attachment]
Alikarami (OP) · 9mo ago
So basically a 400 Bad Request.
Madiator2011 · 9mo ago
Though I do not see an error. I'm not sure what is happening with the worker. Does it not upload the image?
Alikarami (OP) · 9mo ago
But you can see down there that my envs are empty, so the rest of the code will not work. I sent you the rp_handler.py; basically I need to download an image from S3 first, and I'm doing that with boto3. Boto needs the env vars, and that's why it fails.
Madiator2011 · 9mo ago
Btw, you know you do not need to implement boto yourself: from runpod.serverless.utils import rp_download, upload_file_to_bucket, upload_in_memory_object, then use upload_file_to_bucket.
Madiator2011 · 9mo ago
GitHub: worker-deoldify/src/handler.py at main · kodxana/worker-deoldify (DeOldify worker for RunPod serverless)
Alikarami (OP) · 9mo ago
Yes, exactly. Unfortunately we needed boto because we generate a presigned URL with an access key, and then use rp_download to download it:
def generate_presigned_url(bucket_name, object_key, expiration=3600):
    try:
        s3_client = boto3.client(
            's3',
            aws_access_key_id=aws_access_key_id,
            aws_secret_access_key=aws_secret_access_key,
            region_name=aws_region,
        )
        response = s3_client.generate_presigned_url(
            'get_object',
            Params={'Bucket': bucket_name, 'Key': object_key},
            ExpiresIn=expiration,
        )
    except ClientError as e:
        logger.info("Error generating presigned URL: %s", e)
        return {
            "error": str(e),
            "output": traceback.format_exc(),
            "refresh_worker": True,
        }
    return response
source_image_path = rp_download.download_files_from_urls(
    event["id"], [source_image_link]
)[0]
ashleyk · 9mo ago
Are you calling your handler from the Dockerfile or is it called within a bash script like start.sh?
Alikarami (OP) · 9mo ago
It's a fork of your runpod-worker-inswapper 😄 so yes, it's called from start_standalone.sh.
ashleyk · 9mo ago
Try adding:
export BUCKET_ENDPOINT_URL
etc. into your start.sh script. Make sure to increment the Docker image tag as well.
Alikarami (OP) · 9mo ago
Ok, I'll test it now. Yes, I always increment the tag 👍 So I tested it, but the problem is still there.
Alikarami (OP) · 9mo ago
[image attachment]
Alikarami (OP) · 9mo ago
env1 and env2 are still empty. Also, FYI, it works locally on my machine with python3 -u rp_handler.py --rp_serve_api:
INFO | https://XXXX.s3.eu-central-1.amazonaws.com/ | env1
INFO | https://XXXX.s3.eu-central-1.amazonaws.com/ | env2
I have this in my start.sh:
export BUCKET_ENDPOINT_URL
export AWS_S3_REGION
export AWS_S3_ACCESS_KEY_ID
export AWS_S3_SECRET_ACCESS_KEY
export AWS_S3_BUCKET_NAME
export BUCKET_ENDPOINT_URL
export BUCKET_ACCESS_KEY_ID
export BUCKET_SECRET_ACCESS_KEY
And this is how I call start.sh:
# Docker container start script
COPY --chmod=755 start_standalone.sh /start.sh

# Start the container
ENTRYPOINT /start.sh
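Given the template-vs-start.sh back and forth above, a debugging step that would settle the question is to dump, from inside the handler process itself, every bucket-related variable it actually received. dump_bucket_env is a hypothetical helper; the prefixes match the variable names used in this thread:

```python
import os

def dump_bucket_env(prefixes=("BUCKET_", "AWS_", "RUNPOD_")):
    """Log which bucket-related env vars the worker process actually sees."""
    found = {k: v for k, v in sorted(os.environ.items()) if k.startswith(prefixes)}
    for name, value in found.items():
        # Print only set/empty status so secrets never land in the logs.
        print(f"{name}={'<set>' if value else '<empty>'}")
    return found
```

Calling this once at the top of rp_handler.py makes it obvious from the worker logs whether the template values ever reached the process, without exposing the secret values themselves.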
An interesting thing I found out just now: when I enter a wrong input to force a validation error, I don't get a validation error, but exactly the same behavior. Locally I do get the error, so I don't get it.