Having an issue with the first deploy from a commit
Hello.
I am trying to run a discord.js app on Railway. It works great most of the time.
However, when a deploy gets triggered on the production branch and the app deploys, some code stops working as expected.
In my app, I have the following code:
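(roughly like this; the endpoint and header name are assumptions, and TRN_API_Token is the config value logged later in the thread)
// sketch of the failing call, with a hypothetical Tracker Network endpoint
import { TRN_API_Token } from './config.js';

const res = await fetch('https://public-api.tracker.gg/v2/some-endpoint', {
  headers: { 'TRN-Api-Key': TRN_API_Token }, // assumed header name
});
if (!res.ok) throw new Error(`Unexpected response: ${res.status}`);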
some_http_header_value is either empty or not there at all, causing the fetch to fail.
However, if I just redeploy the build on Railway, it works as expected.
Might there be some weird caching issue?
config.js exists and works in other areas of the app.
Still investigating on my side, but any obvious ideas as to what's happening?
Project ID:
a1f9ffe2-f62a-40c6-93ae-a31972cae7bb
I naturally tried to console.log the value, which logged as expected, and the app was working as expected.
Solution
set a service variable NIXPACKS_NO_CACHE = 1
Ahh, trying that
Is the caching different in the production environment compared to a dev environment?
Anyhow, that seems to have solved it. Thanks!
caching is done per service and isolated to the environment it's in, so the service in the development environment would not use the same cache directory as the service in the production environment
Seems to not have fixed it after all. Seeing this in the build log as well
then I'd have to lean more towards unstable code, and not a railway issue
No idea what that could be as I'm importing a static value from a file
Some questions to help narrow down the issue:
- Where is the value coming from? Is it an env var or static, as in const value = 'foo'?
- Is it empty or not there at all? Are you receiving a null or undefined?
- How do you know fetch() is failing due to the missing header? Could it be failing for a different reason?
- The value is coming from a config.js file, like:
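(a sketch with a placeholder token; the actual string is elided)
// config.js — exports a static API token
export const TRN_API_Token = 'hypothetical-token-value';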
Then imported like this:
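(again a sketch, assuming config.js sits next to the importing file)
import { TRN_API_Token } from './config.js';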
- I haven't been able to console.log(TRN_API_Token) at the same time as having the issue. I'll leave a debug console.log() in there for a while now. As I mentioned though, I once simply redeployed the build on Railway, and it was working again.
- Sadly I don't fully know this yet, but I don't see any other points of failure. I could be running the exact same code in two environments, and one environment would have the fetch() return a 401, while the other would be fine.
It's hard to debug properly when the outcome can change with 0 changes to the code
Okay, I was just able to get the issue while also logging the header value.
The value logs correctly, but I still encounter the 401 response.
This leads me to believe:
- The API I fetch blocks some of the Railway IPs
- Something is broken with fetch(), but I doubt it
Is there any way to see the IP of the current deployment?
use dig
but keep in mind that a single IP would be shared across any railway deployments
and it is likely that the API service you are calling has only blocked some GCP IPs, since us-west1 has thousands of IPs
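If dig is awkward, one quick option is to log the outbound IP from inside the app itself; a sketch using the public ipify echo service (not something from the thread):
// log the deployment's public outbound IP on startup
const res = await fetch('https://api.ipify.org');
console.log('outbound IP:', await res.text());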
Just to confirm, would different environments in a project likely have different IPs?
yes, even different deployments, it just depends on what box it lands on
have you said what API you are calling?
Tracker Network
Unless it's IP-blocking or some odd caching on their side (or mine?), I am out of ideas
All the values logged are correct, yet I hit a 401, while a Postman request or anything else outside Railway works just fine with these exact values
some bad actors on railway might have previously abused tracker's api and got some of the IPs blacklisted
Oh well. Started noting down some IPs and whether they work as expected or not to hopefully send to Tracker Network to check
you could also use a proxy?
Yeah, could try that
Created a quick proxy worker with Cloudflare Workers, and around 50% of the time it returns a 401 in Postman. Definitely seems like something is funky on their side
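For reference, a minimal sketch of that kind of pass-through Worker; the upstream host is Tracker Network's public API, everything else is an assumption:
// Cloudflare Worker (module syntax): forward incoming requests to the upstream API
export default {
  async fetch(request) {
    const url = new URL(request.url);
    // rebuild the request against the upstream host, keeping path, query, and headers
    return fetch('https://public-api.tracker.gg' + url.pathname + url.search, {
      headers: request.headers,
    });
  },
};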
well that's unfortunate
Glad to say it's been solved on their side. No IPs were being blocked