Empty Postgres reference env var
I followed the guide at https://blog.railway.app/p/automated-postgresql-backups to create a Node cron job in our monorepo. It runs fine, but the backup archives are empty.
But when I run the command locally, I get a full dump of ~20 MB.
Project ID:
N/A
Running the Node cron job locally with the right env vars also works fine. It only fails on Railway.
Already tried redeploying / restarting the cron service. No luck
Seems like the issue is
BACKUP_DATABASE_URL
not being set correctly. pg_dump
still returns successfully, and an empty *.tar.gz
is created. That's what's being uploaded to S3.
Can someone from the team please look into this?
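A fail-fast guard at the top of the backup script would surface this immediately. A sketch, not the template's actual code; the variable name is the one used in this thread:

```shell
# Abort before pg_dump runs if BACKUP_DATABASE_URL is missing or empty.
# Demonstrated in a subshell with the variable deliberately unset:
env -u BACKUP_DATABASE_URL sh -c '
  : "${BACKUP_DATABASE_URL:?must be set and non-empty}"
  echo "would run pg_dump now"
' 2>/dev/null || echo "backup job aborted before pg_dump ran"
```

The `${var:?}` expansion makes the shell exit with an error the moment the variable is unset or empty, so no empty archive is ever produced, let alone uploaded.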
Also noticed this in one of the network request responses:
@Brody Sorry about the ping. But seems like this post is not getting any attention 😓
bruh it's 4:30 am
damn! sorry about that 🙏
if you require priority support you may want to look into the teams plan
Flagging this thread. A team member will be with you shortly.
Bump!
What's your project ID, and where are you setting
${{Postgres.DATABASE_URL}}
from?
Service ID:
e041e6bc-e740-4fec-a6dc-41e022086166
Setting the env var using the reference-variable helper in the UI
Sorry for the late response, I wasn't notified of anything here.
Total shot in the dark here, but can you create another Postgres database, attach a variable reference to the cron service, then run
railway variables
again?
I'll try that out.
Actually, can I create another Postgres database alongside the existing one? What would be the reference name for that then? The current DB is being used by the staging backend API and I don't really want that to go down.
doesn't really matter, this is purely just a test to see if the reference gets rendered
oh and yes, you absolutely can create more databases of the same type, it won't affect anything
ah right, I see the
-1
appended
yep!
no no
you were supposed to add another variable, with a different name
oh mb
try it anyway
railway variables
okay. waiting for redeploy to finish
also not needed
oh okay
assuming you have the CLI correctly linked to the cron service, just run the command
still empty. uploading screenshot..
Also noticing this (discrepancy?) when i shell into it
I can see DATABASE_URL in the shell env, but not in the variables command output above
little correction here, that's a local shell
wdym
railway shell
is a local shell
yes, but with the service's env in it, right?
correct
so the
railway variables
output should also be the same as the service's env?
as the service variables in the UI, yes
either way, where is that
DATABASE_URL
variable from?
yeah, that's the discrepancy I'm noticing
is that something you've set in your own shell?
no, my shell prints it empty
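One quick way to confirm that (a local sanity check, not Railway-specific; the variable name is the one from this thread) is to probe a clean environment:

```shell
# Does DATABASE_URL leak in from the local shell setup? A clean env should not have it.
env -i PATH=/usr/bin:/bin sh -c 'printenv DATABASE_URL || echo "DATABASE_URL not set in a clean shell"'
```

If the clean shell prints nothing but `railway shell` shows a value, the variable is being injected by the Railway CLI session, not by local rc files.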
now that's beyond bizarre
I do have that
DATABASE_URL
env in another service in the same project tho, the backend API service
but still, separate service
would the
DATABASE_URL
variable that shows up in the railway shell happen to be the same variable that's being referenced in that screenshot?
yeah, that's correctly referenced to the expected database
not what I meant
sorry 😓
is the variable that's hidden under the red block the same as
the variable for this reference
yes, that's what I meant
I wish I could see the things Ray can see sometimes
anything else you want me to try?
yeah but there's just so much back and forth
not your fault, just comes with text based support
I also tried changing
BACKUP_DATABASE_URL
to DATABASE_URL
both in code & env
But still seeing the same thing.
I now tried directly entering the database connection URL in plaintext.
Backups are still empty
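At this point, a defensive size check before the S3 upload would at least stop empty archives from landing in the bucket. A minimal sketch; the function name and the 1 KB threshold are assumptions, not part of the template:

```shell
# check_archive: refuse to upload near-empty archives.
# Note: gzip of empty input still produces a ~20-byte file, so check
# against a size floor rather than mere existence.
check_archive() {
  size=$(wc -c < "$1")
  if [ "$size" -lt 1024 ]; then
    echo "refusing to upload $1: only $size bytes" >&2
    return 1
  fi
  echo "ok to upload $1 ($size bytes)"
}

# demo with a simulated empty dump
: > /tmp/demo-backup.tar.gz
check_archive /tmp/demo-backup.tar.gz || echo "upload skipped"
```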
Seeing something weird on CLI
^ Tried similarly with the prod DB URL, which is on Heroku. Still empty, and similar output to the screenshots above.
bump!
bump!
just a friendly reminder, if you require priority support you may want to look into the teams plan
By "empty" do you mean there is an archive in your S3 bucket but it's 0 bytes?
yes
Seems like the issue is BACKUP_DATABASE_URL
not being set correctly. pg_dump
still returns successfully, and an empty *.tar.gz
is created. That's what's being uploaded to S3
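If the template's dump step is a pipeline like `pg_dump ... | gzip > backup.tar.gz` (an assumption about its shape), this failure mode follows: without pipefail, the pipeline's exit status is gzip's, so a failed or no-op dump still "succeeds" and leaves a small but non-zero gzip file behind. A sketch, with `false` standing in for a failing pg_dump:

```shell
# Without pipefail, the left side of a pipe can fail silently:
sh -c 'false | gzip > /tmp/demo-empty.tar.gz; echo "pipeline exit: $?"'

# The archive exists and is non-zero (just a gzip header), so a naive
# upload step happily ships it to S3:
wc -c < /tmp/demo-empty.tar.gz

# With pipefail (bash), the dump failure propagates:
bash -c 'set -o pipefail; false | gzip > /dev/null' || echo "dump failure detected"
```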
do you see any connection errors in that template's deployment logs?
no
bump
I looked at your service again and noticed a few things:
- You only have the variables set for the backup service in staging. Is this intended?
- The BACKUP_DATABASE_URL is pointing to your staging instance Postgres. Does it have any data?
- When you tried
railway shell
/ railway variables
, which Railway environment is that running in?
1. We have set other variables for other services. But the
BACKUP_DATABASE_URL
variable is only set for the backup service. That is intended.
2. Yes there is data in the staging Postgres instance
3. The screenshots also include the previous commands for
railway link
and
railway service
that select the backup service / backend service in the staging env.
It's still empty? I can confirm your container sees the database URL correctly. This might be a template issue
yup they're still empty.
@fp any ideas?
Looks like you've forked the repo into a private repo?
If you can replicate the issue with the actual template repo, I can try and take a look.
Because I know a bunch of people using the template and no one's reported any issues.
Yeah we have a private monorepo. So we've forked the template as a subrepo in it.
Also, it works fine when I run it locally from within the monorepo
It could be an issue with how you've configured your monorepo?
I don't think so, because the script does get executed fine on Railway and the expected file is being uploaded to S3.
A monorepo config error should prevent successful execution, right?
It definitely can
If the template is working fine, something about your custom configuration is wrong
The template repo within our monorepo executes fine both locally and on Railway. Only the pg_dump output is empty on Railway.