FastAPI service health check fails on IPv6
I'm running a FastAPI service on Railway. I set up a /healthcheck endpoint. It worked well when I exposed the service publicly and bound it to 0.0.0.0:8000 (I also specified the PORT=8000 env var).
However, I want an nginx reverse proxy to be the only publicly exposed service, since the backend will work alongside a separate Next.js frontend service (which won't be publicly exposed either).
In trying to communicate with the service privately, though, I changed the host to :: (same port, 8000) as per the private networking docs: https://docs.railway.app/guides/private-networking#listen-on-ipv6. Now Railway can't hit the /healthcheck endpoint anymore, so the build logs say the service never came online. Any thoughts?
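For context, a minimal sketch of the setup (the module, handler, and response below are illustrative, not our exact code):

```python
# main.py -- simplified sketch of the service
from fastapi import FastAPI

app = FastAPI()

@app.get("/healthcheck")
def healthcheck():
    # trivial response for Railway's health check to poll
    return {"status": "ok"}

# Worked with the public IPv4 bind:
#   uvicorn main:app --host 0.0.0.0 --port 8000
# Health check fails after switching to the IPv6 host:
#   uvicorn main:app --host :: --port 8000
```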
Project ID: 2cc7bfdb-9bfd-4c76-8d34-28e269d7406f
Uvicorn does not support dual-stack binding (IPv6 and IPv4) from the CLI, so while that start command will work to enable access from within the private network, it prevents you from accessing the app from the public domain if needed. I recommend using Hypercorn instead.
I don't need to access it from the public domain though
health checks use IPv4
an example Hypercorn start command would be
hypercorn main:app --bind [::]:$PORT
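and if you ever do need public access as well, Hypercorn accepts --bind more than once, so a dual-stack start command could look like this (untested sketch):

```shell
hypercorn main:app --bind [::]:$PORT --bind 0.0.0.0:$PORT
```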
Whoa
>health checks use IPv4
Any chance you could mention that in the docs?
of course you'd want to set a fixed PORT service variable
will bring that up with the applicable person
Thanks!
So if I disable the health checks, all should work well, correct?
yeah, but I wouldn't disable the health check; I would use Hypercorn
Noted!
Our production systems are battle-tested on Uvicorn. I can't justify switching over given that Railway only has health checks over IPv4, though. It would also be a non-trivial change in the codebase.
that's fair, but without a health check Railway won't know when your app is able to handle traffic
Yup, understood. I'm running an MVP to test out the viability of switching our k8s cluster over to Railway. If we do decide to pull the trigger on the migration, we would consider switching to accommodate the health checks.
sounds good, and with the upcoming runtime I'm sure IPv6 support for the health check will either come by default or be easy enough to implement
Nice! That's good to hear!
Do you have a rough ETA for when those changes would land?
runtime v2 is pre-alpha right now, so I don't have any real ETA to give you; in fact, the v2 runtime doesn't even support health checks at all right now
Coming back to this, my nginx service isn't able to communicate with the FastAPI service
let's see the nginx.conf
I've disabled the healthcheck and have nginx pointed to
http://upcodes-backend.railway.internal:8000
nginx is not ideal for this, but I assume you don't want to switch to Caddy?
Is there a reason why nginx wouldn't work? Happy to switch if that helps / is easier
nginx is a typical reverse proxy for these things
nginx tries to resolve domains once at first start, which is not ideal for two reasons: the private network is not available at first start, and the services do not have static IP(v6) addresses
of course nginx can be configured not to do that, but Caddy doesn't do it by default
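for completeness, the usual nginx workaround is to declare a resolver and proxy through a variable, which forces nginx to re-resolve the hostname at request time instead of once at startup; a rough sketch (the resolver address is a placeholder for whatever DNS server your environment provides):

```nginx
server {
    listen [::]:80;

    location / {
        # re-resolve the internal hostname at request time
        resolver <private-dns-resolver> valid=10s;
        # using a variable in proxy_pass defers DNS resolution
        set $backend http://upcodes-backend.railway.internal:8000;
        proxy_pass $backend;
    }
}
```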
Interesting. I'll give Caddy a shot. Looks like you configured the template!
yep, that template covers your use case, though mine calls the backend endpoint
/api
but that's a simple change
Cool! What would that change look like? I'm not familiar with Caddy
/api/* -> /v0/*
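in the Caddyfile that would look something like this (a sketch; adjust to match the template's actual layout):

```caddy
:{$PORT} {
	# strip the /api prefix, then point the request at the backend's /v0 routes
	handle_path /api/* {
		rewrite * /v0{uri}
		reverse_proxy http://upcodes-backend.railway.internal:8000
	}
}
```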
Right, but where? Ah nvm, I see the Caddyfile in the repo.
yeah you would need to deploy the template and then eject from it
Cool. Yeah, I just copy-pasted your code into my repo and deployed from there
sounds good