Postgres Migration failed at "Migrating Data"
Hi! I have a legacy Postgres DB, and I've gotten the notice about needing to migrate. I finally went ahead and hit the "Migrate" button, but it failed at the "Migrating Data" step. I've tried hitting "Migrate" a couple more times, and it keeps failing at that step.
I now have a total of 5 different entries (services?) in the project:
- "Postgres Legacy" (status: "Migration errored")
- "Postgres Migration" (status: "Crashed")
- 2x "Postgres Legacy Migration" (status: "Crashed")
- "Postgres" (status: "deployed")
I think the relevant error logs section is:
Project ID is 9a511af8-a819-4274-a1a3-c64a6f7ff883.
Any suggestions on next steps here? (Also please @ me so I see the ping)
how much data did you have in your old database?
not immediately sure. I'm running it as the backend for a Umami analytics instance, and I've had it up since about April
(but also semi-manually exported a bunch of data from my old Google Analytics setup and inserted it into this for Umami migration)
do you think it's anywhere near 5gb?
certainly possible. I haven't looked at the DB since I got the initial setup + GA migration step done, so I have no idea how much it's taking up
then that's likely why, the new database services get a 5gb volume
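(if you want to check what's taking up space, a query along these lines in pgadmin will list your biggest tables, just a sketch:)
SELECT relname, pg_size_pretty(pg_total_relation_size(relid)) AS total_size
FROM pg_statio_user_tables
ORDER BY pg_total_relation_size(relid) DESC;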
lovely. was there a size limit on the old plugin?
you may need to upgrade to pro for a 50gb volume
not really
soooooo, the migration involves a bit of a downgrade / limitation, then
in a way, yes
ugh.
I'm primarily an FE dev, and I'd opted for this setup to keep it as minimal and hands-off as possible. I really did not want to spend part of my Christmas break figuring out how to deal with backend DB migrations :(
delete the failed stuff, upgrade to pro, run the migration again
as it is, there's also been a very annoying quirk: the DB's memory just keeps rising indefinitely over time, and I've had to work around that by manually restarting the DB service every few days
postgres does tend to do that, not much railway can do about that
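(a lot of that apparent growth is typically postgres filling its shared buffer cache rather than a real leak; you can see the configured cache size with:)
SHOW shared_buffers;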
okay. so, try deleting everything except "Postgres Legacy"?
correct, please be careful though
heh. anything specific to be "careful" about beyond selecting which item I'm deleting?
that's about it
okay, I'll give it a shot
@Brody fwiw, I did upgrade to Pro and retried. This time the "Migrate Data" step seemed to go on much longer, but finally errored, and the "Legacy Migration" service actually says it "crashed an hour ago". Same error:
is the volume 50gb?
ah, no it's not, it's 5
I can hit a button to increase it, but is that going to get used if I hit "Migrate" again?
when you upgraded, were you asked to move some projects over?
yes, both the PG project and the Umami app project
and did you move them over?
yep
then yes, grow the volume and restart the migration service
actually you might want to wipe the volume first, who knows what's on it so best to start fresh
yep, it just errored due to not being empty
same error:
did you wipe the volume on the new database?
tried to wipe the "Postgres Data" volume
tried?
I hit the button, said it was wiping it and that it completed
fwiw, the UI currently looks like:
pretty sure the top "Postgres Legacy" is my original plugin
delete all the failed stuff that isn't your original database
and then re run the migration
including the "Postgres data" volume?
yes
stupid question, won't that leave me back where I was with an auto-created new DB that's only 5GB?
nope because now this project is on the pro plan, when the migration creates a new database for you it will automatically be 50gb
why didn't that happen with this last attempt, then?
because this latest attempt was after I'd upgraded to Pro
I'm not sure, I don't have the ability to look into that
ok, I'll give it another shot
@Brody the last attempt last evening failed too.
I just opened up my existing DB with pgadmin and ran this query:
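(something along these lines, checking the total database size:)
SELECT pg_size_pretty(pg_database_size(current_database()));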
result?
73 GB
oops. that explains it.
honestly not sure what to do next, if the size is way over even what the Pro plan gives me
haha you sure were making all you could of that no-database-size-limit with the legacy databases
@Christian - pro user needs their volume size limit increased in order to migrate their legacy database
Thank you for the flag, Brody!
@acemarke I've increased the maximum size of the volume to 100GB.
You'll only ever be charged for the amount of data actually stored.
Please go ahead and try the migration again
thanks!
if it helps for context:
I maintain the Redux JS libraries. I migrated our docs away from Google Analytics this year to a Umami instance, and I'm hosting both the PG DB and Umami app instances on Railway. As part of that, I also did a lot of work to export several years' worth of session hits from GA into Umami, because I wanted to have the historical traffic available just in case I ever wanted to compare.
I frankly hadn't looked at how much data was getting used by the DB until just now.
I think the vast majority of the data is actually the historical values prior to 2023-04. I'll see if I can complete the migration now that you've bumped it, but I'll also look at doing a DB dump (so I have that around), and then deleting rows prior to 2023 or so to save space
thank you!
just to check, which of the current project contents, if any, should I delete before re-running the migration? (or truncate, etc)
Please try to rerun as-is from the migration status panel in the original Postgres Legacy service
is umami in a different project?
I'm a bit hazy on what "project" means for Railway, specifically, tbh :)
The team consists of two pieces, which I assume are "projects": the Postgres DB, and the Umami app server.
FWIW, last night I made a backup of the full DB locally (8GB zipped, 36GB unzipped), then deleted all historical data prior to 2023 and ran
VACUUM FULL
on the tables. The size report inside PG now says it's only 19GB instead of 73GB. I'll try running the migration again shortly.
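(roughly this sequence, with Umami's table names assumed here, adjust to your own schema:)
DELETE FROM website_event WHERE created_at < '2023-01-01';
DELETE FROM session WHERE created_at < '2023-01-01';
VACUUM FULL;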
dashboard > projects > services
projects show up on the dashboard, then within the projects there are services, like umami and postgres
so with that said, is your postgres service in a different project than your umami service?
apparently, yes
you are subjecting yourself to unnecessary egress fees by having your database and umami services in separate projects. if you have both in the same project, you can use the private url to connect to the database, eliminating any database-to-service egress fees.
I say this because you have a fairly large database so I imagine you could cut costs by having both in the same project and using private networking
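(i.e. in umami's variables, reference the private url instead of the public one, something like this, service name assumed:)
${{Postgres.DATABASE_PRIVATE_URL}}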
oh, that's good to know
can I move the Umami service over?
I imagine it might be easier to move the umami service into the project that contains postgres
does your umami service have a volume
don't think so, no
deploy my umami template into your project that already has the database, delete the database that my template will add, then hook your database up to the new umami service
which template is that?
the only umami template there is
wait, did you know railway has ready-made templates for things like umami?
only vaguely. I set this up back in April and was following bits and pieces from a couple different blog posts. Don't know if these templates existed back then
they did, and there was an umami template back then too
awright, got that template deployed. I'll try deleting the fresh DB, stopping the old project's Umami service, and pointing the new service at the legacy DB (which I haven't tried to re-migrate yet)
also need to switch over the custom semi-domain, I guess:
redux-docs-umami.up.railway.app
as a sanity check, can you show me a screenshot of the project you just deployed the umami template into?
I deleted the fresh Postgres created by the template, and configured Umami to point to
DATABASE_URL
from "Postgres Legacy"and if I visit the URL from that fresh Umami instance, I do see real data that matches my sites:
awesome, so now to prepare yourself for a retry on the migration, delete all the failed stuff and the dangling volumes, and that postgres legacy service with a volume
yeah. also just switched the
redux-docs-umami.up.railway.app
name over to the new instance
sounds good
okay, we're down to just this:
rename that database to just Postgres
your volume limit is 100gb so you shouldn't have any problems running the migration now
okay, that migration finally succeeded. now let's see what we've got...
fiddled with the DB URL settings a bit, and it looks like I've got real data showing up in my Umami dashboard again
(it does look like the new Postgres service "only" has 50GB instead of the 100GB that was applied yesterday, but not surprising given that I deleted that instance)
I guess it's safe to delete the old DB plugin now?
have you swapped umami's database url over to the new database's private url?
yeah:
${{Postgres (new).DATABASE_URL}}
use the private url, otherwise you will still pay egress fees on database-to-service communication
where do I find that?
nm, settings panel
so after tweaking the name, just
postgres-new.railway.internal
as the database URL?
nope, remove the current reference and use the auto complete
ah... sorry, got me confused here. clarify that one?
oh wait,
${{"Postgres (new)".DATABASE_PRIVATE_URL}}
?
that's it!
ok, giving that a shot
there's auto complete on all service variables in the same project so you should never have a need to type out these kinds of single variable references
hmm. build logs show it acknowledging that
DATABASE_URL
exists, but unable to connect
umami is a docker image, it doesn't have build logs, do you perhaps mean deployment logs?
sorry, yeah
show me the logs please?
gotcha, can you click the eye icon on the database url and make sure it renders properly?
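(it should resolve to a full connection string, roughly this shape, with your own user/host/database values:)
postgresql://postgres:<password>@postgres.railway.internal:5432/railway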
screenshot of your project please
remove the
(new)
from the name of the database, the brackets are causing issues
of course they are :)
and update your variable reference accordingly
okay, that seems to be working
awesome
then I think you're all set?
I think so. safe to delete the old PG plugin now?
if you say all your data made it to the new database intact, then yes
ok, cool.
thank you very much for all your help. like I said, I've got some experience with backend / full-stack stuff, but these days most of my work is frontend, and I generally avoid messing with backend services as much as possible :) so, much appreciated.
(heh. any chance I could get this volume bumped up to 100GB, since I had to delete the earlier one? :) )
it's not 100gb??
ah must have only been bumped on a single volume and not account wide
I'm not able to do that, I don't work for railway. I'd recommend emailing [email protected] instead, it's not an urgent request so they'd get back to you after the new year
sure. thanks again!
no problem!