young90803
Container proxy deployed to Railway is blocked by OpenAI API, but fine when running on my laptop
I have a container-based proxy that makes simple calls to the OpenAI API. It works fine when I run it locally on my laptop, but when I deploy it to Railway, it seems that OpenAI's API blocks requests from the container, as I see the error below:
raise error.Timeout("Request timed out: {}".format(e)) from e
openai.error.Timeout: Request timed out: HTTPConnectionPool(host='whylogs-container.up.railway.app', port=8000): Max retries exceeded with url: /v1/chat/completions (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x10695a9d0>, 'Connection to whylogs-container.up.railway.app timed out. (connect timeout=600)'
Any ideas on what may be causing this?
I wrote a simple Python script to test this. If I replace the host below with 'localhost', it works fine when the container is running on my laptop, but it doesn't work with the same container deployed to Railway.
import openai

# Point the OpenAI client at the proxy instead of api.openai.com
openai.api_key = "<your OpenAI API key>"
openai.api_base = "http://whylogs-container.up.railway.app:8000/v1"

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "hello world"}
    ]
)
print(response)
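One thing worth checking: the traceback shows the timeout happens while connecting to the Railway host itself, not to OpenAI. Railway's public *.up.railway.app domains are served over HTTPS on the default port 443 (traffic is proxied to the port the app listens on), so plain HTTP on :8000 may never connect from outside. A minimal sketch of the same test pointed at the HTTPS endpoint, assuming the container listens on the port Railway routes to:

import openai

openai.api_key = "<your OpenAI API key>"
# Assumption: Railway exposes the service over HTTPS on 443, not on :8000
openai.api_base = "https://whylogs-container.up.railway.app/v1"

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hello world"}]
)
print(response)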
exec /bin/bash: exec format error
Hi - I'm trying to deploy a Docker container service on Railway. Railway builds from my Dockerfile successfully, but during the deployment stage I get the error above, which I have not seen before when running this locally. I normally deploy the container locally using docker compose up --build, but Railway doesn't support Docker Compose, so I can't take that approach. Any ideas or help appreciated!
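For what it's worth, exec format error on /bin/bash usually means the binaries inside the image were built for a different CPU architecture than the machine running them, e.g. an arm64 (Apple Silicon) base image ending up on Railway's amd64 hosts. A minimal sketch of pinning the platform in the Dockerfile, where the base image name is just a placeholder:

# Force an amd64 base image so /bin/bash matches the runtime architecture
# (python:3.11-slim is a placeholder; use your actual base image)
FROM --platform=linux/amd64 python:3.11-slim

Alternatively, if you build the image locally before pushing it anywhere, docker buildx build --platform linux/amd64 . produces an amd64 image explicitly.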