Kuma health checks fail frequently
Recently set up a Kuma service to monitor two of my services (Bun backend & React frontend with Caddy). I started noticing that the health checks frequently fail with the following error:
Monitor #1 'API Service': Failing: connect EHOSTUNREACH 35.212.174.161:443
Monitor #2 'Frontend': Failing: connect EHOSTUNREACH 35.212.174.161:443
I'm not seeing any issues (errors) on either service that would explain the failures.
Project ID: f627edd3-0d74-45c5-90c2-980c37ee4436
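(Worth noting: EHOSTUNREACH means the TCP connection never found a route to the host at all, so the packets never reach the services, which is why their own logs look clean. A minimal sketch that reproduces the same connect step Kuma performs, using the address from the error above; run it from outside the Kuma container to see whether the host is reachable from elsewhere:)

```ts
// probe.ts - minimal TCP connect probe, mirroring the connect step
// that produced "connect EHOSTUNREACH". Host/port are taken from the
// error in this thread; swap in your own service address.
import net from "node:net";

function probe(host: string, port: number, timeoutMs = 5000): Promise<string> {
  return new Promise((resolve) => {
    const socket = net.connect({ host, port });
    socket.setTimeout(timeoutMs);
    socket.once("connect", () => { socket.destroy(); resolve("ok"); });
    socket.once("timeout", () => { socket.destroy(); resolve("timeout"); });
    socket.once("error", (err: NodeJS.ErrnoException) => {
      // err.code will be e.g. EHOSTUNREACH, ECONNREFUSED, ETIMEDOUT
      resolve(err.code ?? err.message);
    });
  });
}

probe("35.212.174.161", 443).then((result) =>
  console.log(`connect result: ${result}`)
);
```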
switch to the v2 runtime in the service settings
I have switched to v2 on both of those services, though not on the Kuma service or the Postgres instance I have. Will switch those as well and monitor.
sounds good
fwiw another user who had the exact same error switched to v2 and said the errors stopped, and I haven't heard back from them since
Still seem to be having issues.
All services running on v2.
The failures always happen at the same time, so I'm not sure whether the Kuma service is failing to send the requests or the other services are failing to receive them.
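(One way to test that is to probe both services at the same instant from somewhere other than the Kuma container; if the failures still line up, the problem is on the receiving side rather than in Kuma. A rough sketch, with hypothetical placeholder URLs for the two services:)

```ts
// correlate.ts - hit both monitored endpoints simultaneously and log
// whether failures coincide. Replace the placeholder URLs with the
// real public URLs of the API and frontend services.
const targets = [
  "https://api.example.up.railway.app",   // hypothetical API service URL
  "https://front.example.up.railway.app", // hypothetical frontend URL
];

async function check(url: string): Promise<string> {
  try {
    const res = await fetch(url, { signal: AbortSignal.timeout(5000) });
    return `up (${res.status})`;
  } catch (err) {
    return `down (${(err as Error).message})`;
  }
}

// Check both targets once a minute and log the pair of results,
// so correlated failures show up on the same timestamped line.
setInterval(async () => {
  const results = await Promise.all(targets.map(check));
  console.log(new Date().toISOString(), results.join(" | "));
}, 60_000);
```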
ohhhh
kuma is not on the v2 runtime
despite you selecting it
hmm, should I switch to legacy then back to v2?
the v2 runtime does not support volumes as of right now, so you are still on legacy
once railway moves to bare metal they will add support for volumes on the v2 runtime
Ah okay, makes sense.
so not much else to do right now but to set Kuma to notify you only if a check fails twice in a row instead of once
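(That's the "Retries" field when editing a monitor in Uptime Kuma: the monitor only goes down, and notifications only fire, after that many consecutive failed checks. The debounce idea is roughly this; a sketch of the concept, not Kuma's actual code, and `notify` is a hypothetical callback:)

```ts
// Alert debounce, roughly what a "retries" setting does: only fire a
// notification after N consecutive failures, so a single flaky check
// doesn't page you.
function makeAlerter(retries: number, notify: (msg: string) => void) {
  let consecutiveFailures = 0;
  return (checkPassed: boolean) => {
    if (checkPassed) {
      consecutiveFailures = 0; // any success resets the streak
      return;
    }
    consecutiveFailures += 1;
    if (consecutiveFailures === retries) {
      notify(`check failed ${retries} times in a row`);
    }
  };
}

// With retries = 2, one isolated failure stays quiet:
const onResult = makeAlerter(2, console.log);
onResult(false); // no alert yet
onResult(true);  // reset
onResult(false); // no alert yet
onResult(false); // alert fires here
```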
Yeah will do, thanks!
sorry about the flaky experience!
No worries, I don't care much about the Kuma service in specific, was just worried that the services were going down frequently.
haha nope railway isn't that unstable
Solution
hey @leke - most of railway's hosts now support volumes on the v2 runtime, set your service to use the v2 runtime and redeploy; you will know you're on the v2 runtime if you see "container event" logs