Bare Metal East Issue
I went to try the metal east region today but it seemed to ignore my config-as-code and forced things onto nixpacks instead of my dockerfiles. Is that expected?
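For context, a config-as-code file that pins the Dockerfile builder typically looks something like this (a minimal sketch following Railway's documented railway.json schema; the values are illustrative, not the actual config from this thread):

```json
{
  "$schema": "https://railway.app/railway.schema.json",
  "build": {
    "builder": "DOCKERFILE",
    "dockerfilePath": "Dockerfile"
  }
}
```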
Project ID:
41f3e5b0-3233-406e-938c-ed0cbe426163
I reverted back to non-metal and things re-deployed properly
service id please
where do I find that?
or is it just the name?
in the url
16dac37f-5029-40d0-9aa2-343a1d24c3fb looks like?
you tell me, is that the service you are having trouble with?
it is a part of the url of one of the 4 services that had the issue; just not sure it is the right url piece
it was the bit after
service/
its not on metal?
not any longer
reverted so it worked again
fair enough
22141b7a-9c6e-4bc9-8dd1-d110e34d4ec3 was a build for that service that was attempted on metal
can we revert back to metal?
i can revert a different service. sec
service: ad2c318e-3979-430b-a21b-8d7e3f15eab0
it used a dockerfile
hrm
ill tell you if its really on metal once it deploys
yes it is
build 5cf3a139-57a3-45a5-a839-45ab51ac3f04 of that service was the original failure that tried to use nixpacks somehow
wanna put the other service on metal now?
sure, can try the others to see if they start working
lets see what happens
it also used a dockerfile
yeah
also getting a weird error about port binding that I have to track down though. it is going to fail
so problem fixed?
well i haven't gotten a clean metal deploy for the one and still no explanation of why I got random nixpack builds in the middle, so probably will go back to non-metal for now for stability
alrighty, let me know if this happens again!
ah, figured out my port issue. occasionally, the random railway PORT would be 8080, which is what my service behind the proxy binds to. other times it was 8081 or something else and everything was hunky-dory
i dont see anywhere in our code that would set it to 8081, its hardcoded to 8080
hm. pretty sure PORT used to be, if not randomized, at least not static, and we could force-set it
anyhow, forcing it to 8081 fixed that for me
yes it was random on the legacy runtime
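A minimal sketch of the collision described above, assuming a hypothetical proxy-plus-backend layout (the 8080/8081 values come from the thread; everything else is illustrative):

```python
import os

# Hypothetical setup: the backend behind the proxy binds a hardcoded port,
# while the proxy binds whatever the platform injects as PORT. If the
# injected PORT also happens to be 8080, the two collide.
BACKEND_PORT = 8080                               # hardcoded in the app
PROXY_PORT = int(os.environ.get("PORT", "8081"))  # injected by Railway

if PROXY_PORT == BACKEND_PORT:
    raise RuntimeError(
        f"PORT={PROXY_PORT} collides with the backend's hardcoded "
        f"{BACKEND_PORT}; force-set PORT (e.g. to 8081) in the service "
        "variables to avoid this"
    )
```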
any idea when it might be possible to move databases to metal btw? with things in non-metal US West, i get api response times of ~110ms; swapping the service to metal west approx doubles it; swapping to metal east ~doubles again (understandably).
I suspect it is database latency, but haven't dug a lot into my tracing yet
are you connecting to the database over the private network?
should be
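For reference, connecting over the private network just means using the service's internal hostname instead of the public proxy; a sketch assuming a Postgres database and a DATABASE_URL that points at the internal domain (the hostname shown is hypothetical):

```python
import os
import psycopg2  # assumes Postgres; any client library works the same way

# Railway's private network exposes services on *.railway.internal
# hostnames, so traffic stays inside the environment rather than going
# out through the public proxy. DATABASE_URL is assumed to hold e.g.
#   postgresql://user:pass@postgres.railway.internal:5432/railway
conn = psycopg2.connect(os.environ["DATABASE_URL"])
```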
do you have tracing?
trying to see if it recorded any of them (it is heavily sampled for cost)
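Heavy sampling for cost usually means head-based sampling at a small ratio, which is why a specific slow request may never have been recorded; a sketch with the OpenTelemetry Python SDK (the 1% ratio and span name are assumed for illustration):

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.sampling import TraceIdRatioBased

# Head-based sampling: keep ~1% of traces to control cost, at the price
# of dropping most individual traces.
trace.set_tracer_provider(TracerProvider(sampler=TraceIdRatioBased(0.01)))
tracer = trace.get_tracer("api")

with tracer.start_as_current_span("db-query"):
    pass  # recorded or dropped according to the sampler's decision
```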
but to answer the question, i dont have an exact timeline, but our storage hardware is in transit
ok, thanks. I'll keep an eye if I can get a good trace too. Calling it a night for now
sounds good!