Migration failed at "Migrate data" step
I'm trying to migrate my Postgres database to the latest version, but I get this error.
Project id : a7345c66-bdc3-498c-a8eb-04d21132af95
what do the logs of the migration service say?
==== Dumping database from PLUGIN_URL ====
pg_dump: warning: there are circular foreign-key constraints on this table:
pg_dump: detail: hypertable
pg_dump: hint: You might not be able to restore the dump without using --disable-triggers or temporarily dropping the constraints.
pg_dump: hint: Consider using a full dump instead of a --data-only dump to avoid this problem.
pg_dump: warning: there are circular foreign-key constraints on this table:
pg_dump: detail: chunk
pg_dump: hint: You might not be able to restore the dump without using --disable-triggers or temporarily dropping the constraints.
pg_dump: hint: Consider using a full dump instead of a --data-only dump to avoid this problem.
[ OK ] Successfully saved dump to plugin_dump.sql
Dump file size: 15G
==== Restoring database to NEW_URL ====
DO
psql:plugin_dump.sql:1530063: ERROR: could not extend file "base/16384/23528.3": No space left on device
HINT: Check free disk space.
CONTEXT: COPY strategy_reports, line 53127
psql:plugin_dump.sql:1530063: STATEMENT: COPY "public"."strategy_reports" ("id", "avg_bars_in_loss_trade", "avg_bars_in_trade", "avg_bars_in_win_trade", "avg_los_trade", "avg_los_trade_percent", "avg_trade", "avg_trade_percent", "avg_win_trade", "avg_win_trade_percent", "commission_paid", "gross_loss", "gross_loss_percent", "gross_profit", "gross_profit_percent", "largest_los_trade", "largest_los_trade_percent", "largest_win_trade", "largest_win_trade_percent", "margin_calls", "max_contracts_held", "net_profit", "net_profit_percent", "number_of_losing_trades", "number_of_wining_trades", "percent_profitable", "profit_factor", "ratio_avg_win_avg_loss", "total_open_trades", "total_trades", "timeframe", "long_only", "short_only", "max_strategy_draw_down", "open_pl", "buy_hold_return", "sharpe_ratio", "sortino_ratio", "max_strategy_draw_down_percent", "max_strategy_run_up", "buy_hold_return_percent", "open_pl_percent", "max_strategy_run_up_percent", "from_date", "to_date", "trades", "history_buy_hold", "history_draw_down", "history_draw_down_percent", "history_equity", "history_equity_percent", "history_buy_hold_percent", "commission_value", "commission_type", "from_date_trading", "to_date_trading", "default_quantity_type", "default_quantity_value", "last_100_trades", "last_60_days_profit_factor", "last_60_days_total_trades", "use_bar_magnifier", "last_60_days_net_profit_percent", "created_at", "updated_at", "created_by_id", "updated_by_id", "t_statistic", "p_value", "standard_deviation_of_returns", "statistical_relevancy_score", "excess_return_percent", "annualized_rate_of_return", "avg_trade_duration_ms", "strategy_type", "history_cumulative_returns_percent_timed", "history_draw_down_percent_timed", "history_cumulative_buy_hold_returns_percent_timed", "alpha", "unique_key", "public", "pyramiding") FROM stdin;
[ ERROR ] Failed to restore database to postgresql://postgres:[email protected]:5432/railway.
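Side note on the circular foreign-key warnings in that log: they are only warnings, and the actual failure further down is disk space. But if you ever run the dump/restore by hand, pg_dump's own hints translate to roughly this sketch, assuming PLUGIN_URL and NEW_URL are the same connection strings the migration service uses:

# A full custom-format dump avoids the circular-FK ordering problem of a --data-only dump
pg_dump --format=custom --no-owner --file=plugin_dump.dump "$PLUGIN_URL"
pg_restore --no-owner --dbname="$NEW_URL" plugin_dump.dump

# Or keep a data-only SQL dump but let pg_dump emit the trigger-disabling statements
pg_dump --data-only --disable-triggers --file=plugin_dump.sql "$PLUGIN_URL"
psql "$NEW_URL" -f plugin_dump.sql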
please use this to download and share the full logs: https://bookmarklets.up.railway.app/log-downloader/
how big is your legacy database?
If you mean the memory metric, 6GB
nope, I'm talking about the size of the data you have stored in the database
new databases are limited to 5gb on the hobby plan, do you think you have more data than that stored in your legacy database?
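if you're not sure, a quick way to check how much data is actually stored is to ask Postgres directly, something like this sketch (it assumes you can connect to the legacy database with psql; LEGACY_DATABASE_URL is just a placeholder for its connection string):

# reports the on-disk size of the database you are connected to
psql "$LEGACY_DATABASE_URL" -c "SELECT pg_size_pretty(pg_database_size(current_database()));"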
Yes it's at least 8-10 GB
then you would need to upgrade to pro for access to 50gb volumes, then you could rerun the migration
Ok thanks
The migration completed successfully, but I now have two Postgres Legacy services. Is this normal?
it's not unheard of, delete the postgres legacy that has the volume
the actual legacy database will not have a volume
Sorry, is this the 3rd one starting from the left?
tbh not what I thought it would look like, is that postgres legacy service with a volume the newest database with a 50gb volume?
Yes it wasn't here before
but is that postgres legacy service with a volume the newest database with a 50gb volume?
I'm not sure to be honest, how can I check?
did someone else run the migration for you?
No I did
I checked by clicking on the volume, and I guess it's the newest database because the other Postgres Legacy service has logs going back 7+ days, if that's what you're asking. I don't have any other Postgres DB in this project
is all your data in the new postgres legacy service?
is the volume on that postgres legacy service a 50gb volume?
Yes, it seems complete; the number of rows is correct
It's a 50GB volume yes
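if you want something firmer than eyeballing it, comparing approximate per-table row counts on both databases is a quick sanity check, roughly like this (OLD_DATABASE_URL and NEW_DATABASE_URL are placeholders; the counts come from Postgres statistics, so they are estimates):

# approximate live row counts per table; run against both databases and diff the output
psql "$OLD_DATABASE_URL" -c "SELECT relname, n_live_tup FROM pg_stat_user_tables ORDER BY relname;"
psql "$NEW_DATABASE_URL" -c "SELECT relname, n_live_tup FROM pg_stat_user_tables ORDER BY relname;"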
okay then you can just rename it to
Postgres
Ok
make sure you are using variable references to connect your apps to the new database. Once that is done and you are absolutely sure everything made it into the new database, you are free to delete the old one
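for example, in your Strapi service's variables that would look roughly like this sketch (it assumes you rename the new service to Postgres and that your app reads DATABASE_URL; adjust the names to whatever you actually use):

# reference variable pointing Strapi at the new database service
DATABASE_URL=${{Postgres.DATABASE_URL}}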
Okay, thank you.
This month I've had crazy egress costs that I think are due to my scraping service sending data to Strapi, which then communicates with Postgres. Will private networking solve this issue? I activated it and replaced the old host with the private host in my scraping script (using the private network to communicate with Strapi)
there are no egress fees on the private network when you do service to service data transfer, so using it wherever possible will definitely reduce your egress costs
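concretely, private hostnames follow the pattern <service-name>.railway.internal, so the chain would look roughly like this sketch (the service names are assumptions, and these are generic env var names; your scraper and Strapi config keys may differ):

# scraper -> Strapi over the private network (keep Strapi's internal port, 1337 by default)
STRAPI_URL=http://strapi.railway.internal:1337

# Strapi -> Postgres over the private network (libpq-style variables shown as an example)
PGHOST=postgres.railway.internal
PGPORT=5432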
Ok
Thanks for the fast support, it's appreciated
no problem!