Railway11mo ago
candide

NocoDB: deployment crash, migration failure, current transaction is aborted

Hi! I'm suffering from a migration failure and a crashed NocoDB deployment on Railway. It says "current transaction is aborted, commands ignored until end of transaction block". Project ID: 611ca5b0-4e20-46b7-bbc8-993d9ad7024d
41 Replies
Percy
Percy11mo ago
Project ID: 611ca5b0-4e20-46b7-bbc8-993d9ad7024d
candide
candideOP11mo ago
Here are the Deploy logs:
Migration from 0108002 to 0111005 failed

Error
-----
error: create index "nc_goya___nc_m2m_rv6ot3evwh_table2_id_index" on "nc_goya___nc_m2m_rv6ot3evwh" ("table2_id") - current transaction is aborted, commands ignored until end of transaction block
    at Parser.parseErrorMessage (/usr/src/app/node_modules/pg-protocol/dist/parser.js:287:98)
    at Parser.handlePacket (/usr/src/app/node_modules/pg-protocol/dist/parser.js:126:29)
    at Parser.parse (/usr/src/app/node_modules/pg-protocol/dist/parser.js:39:38)
    at Socket.<anonymous> (/usr/src/app/node_modules/pg-protocol/dist/index.js:11:42)
    at Socket.emit (node:events:517:28)
    at Socket.emit (node:domain:489:12)
    at addChunk (node:internal/streams/readable:335:12)
    at readableAddChunk (node:internal/streams/readable:308:9)
    at Readable.push (node:internal/streams/readable:245:10)
    at TCP.onStreamRead (node:internal/stream_based
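(Context for anyone who lands here later: "current transaction is aborted, commands ignored until end of transaction block" is a symptom rather than the root cause. Postgres puts a transaction into an aborted state as soon as any statement inside it fails, and then refuses every further command until the transaction is rolled back, so the statement that actually failed is somewhere earlier in the migration. A minimal sketch of the behaviour, using hypothetical table and index names:)

BEGIN;
-- suppose an earlier migration statement errors out here,
-- for example creating an index that already exists:
CREATE INDEX nc_demo_idx ON nc_demo_table (table2_id);
-- every statement after that failure now returns:
--   ERROR: current transaction is aborted, commands ignored until end of transaction block
CREATE INDEX nc_demo_idx2 ON nc_demo_table (table1_id);
ROLLBACK;  -- the only way out of the aborted state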
Brody
Brody11mo ago
did the migration itself complete?
candide
candideOP11mo ago
I found these other errors too
No description
candide
candideOP11mo ago
How can I verify this?
Brody
Brody11mo ago
did the migration modal tell you the migration went off without issues?
candide
candideOP11mo ago
I'm a bit of an amateur here: where can I find the migration modal?
Brody
Brody11mo ago
you would have opened it when you went to migrate your database
candide
candideOP11mo ago
Ooooooh, yes. I migrated it last year after the email announcement was sent, and it migrated successfully.
candide
candideOP11mo ago
No description
Brody
Brody11mo ago
is noco still using the legacy database?
candide
candideOP11mo ago
I've checked the variables, and they use the db credentials for the new migrated service
Brody
Brody11mo ago
have you confirmed all data is in place in the new database?
candide
candideOP11mo ago
Yes, just checked again now
Brody
Brody11mo ago
what do you have for your raw NC_DB variable? if it contains sensitive information then that means it's wrong
candide
candideOP11mo ago
Which one is raw NC_DB?
No description
Brody
Brody11mo ago
would you be interested in deploying a new nocodb? your deployment is very old
candide
candideOP11mo ago
I'm not opposed to it. How can I deploy a new one while maintaining my db data?
Brody
Brody11mo ago
you would first need to deploy a new nocodb template, then deploy a migration template to copy the data from the current postgres to the new postgres database. if that's something you want to do, i could write up a clear guide
candide
candideOP11mo ago
That would be wonderful, thank you 🙏🏿
Brody
Brody11mo ago
okay i will get to that when i have some time! would you like a ping when that's ready?
candide
candideOP11mo ago
Ouf, I see what you mean now by my deployment being very old... The dockerfile from Railway isn't even the same as the one I have in my repo... Do you think it would be enough to just edit my dockerfile to match it to the new one? If not, a ping would be perfect.
Brody
Brody11mo ago
I think you are using variables that nocodb has since renamed, so a migration to a new noco deploy would be best. will ping with guide once I've done that!
@canxmedia - didn't expect it to be such a long guide; take it slow, and if you need any clarification, please don't hesitate to ask.
hit the character count for my message, so here is a gist instead: https://gist.github.com/brody192/7689605241b25100135753029fc9484a
candide
candideOP11mo ago
Thank you so much, Brody! I'm currently at the step "Monitor the logs of the postgres migration service". The deployment crashed, and was looping through the following error:
==== Validating environment variables ====
[ OK ] PLUGIN_URL correctly set
[ OK ] NEW_URL correctly set
==== Checking if NEW_URL is empty ====
psql: error: connection to server at "roundhouse.proxy.rlwy.net" (35.212.138.205), port 45695 failed: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
An error occurred. Exiting...
No description
Brody
Brody11mo ago
looks like you either are using the DATABASE_URL variable from the new postgres database as the NEW_URL variable for the migration, or you got the new and old variables mixed up. now remember, railway is not responsible for any data loss that may occur
candide
candideOP11mo ago
You're right. I used DATABASE_URL for both old and new. Let me try again. Should I delete the Postgres Plugin Migration, or just edit the variables?
Brody
Brody11mo ago
you can just edit the variables
candide
candideOP11mo ago
Okay, I just want to make sure I understand the following passage from your guide:
Copy the DATABASE_URL variable from the Postgres database in your old NocoDB project into the note. making sure denote that it is the old database's URL
Copy the DATABASE_PRIVATE_URL variable from the Postgres database in your new NocoDB project into the note. making sure denote that it is the old database's URL
I wasn't sure if the last sentence was a typo, so I noted down:
OLD DATABASE_URL: A
OLD DATABASE_PRIVATE_URL: B
NEW DATABASE_URL: C
NEW DATABASE_PRIVATE_URL: D
Am I correct to place the following values?
PLUGIN_URL: A
NEW_URL: D
Brody
Brody11mo ago
100% a typo, my bad (i've fixed the gist)
old DATABASE_URL -> PLUGIN_URL
new DATABASE_PRIVATE_URL -> NEW_URL
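(Side note, not part of the guide: before kicking off the migration it can be worth sanity-checking that each connection string points at the database you expect. A rough check, assuming you can open a SQL console against each URL with psql or DbGate, is to run the query below against both the old (PLUGIN_URL) and new (NEW_URL) databases and compare; the table counts should make it obvious which one already holds your existing NocoDB data.)

-- run against each connection string separately and compare the counts
SELECT current_database() AS db, count(*) AS nc_tables
FROM pg_tables
WHERE tablename LIKE 'nc_%';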
candide
candideOP11mo ago
Awesome, so far so good...
candide
candideOP11mo ago
Well... Good news: the db migrated once I put in the right credentials. Bad news: the deployment failed with a similar issue. I think there's something wrong inside the database's data, but I just don't know what...
[Nest] 10  - 01/12/2024, 3:22:50 AM   ERROR [ExceptionHandler] create index "nc_goya___nc_m2m_rv6ot3evwh_table2_id_index" on "nc_goya___nc_m2m_rv6ot3evwh" ("table2_id") - current transaction is aborted, commands ignored until end of transaction block

error: create index "nc_goya___nc_m2m_rv6ot3evwh_table2_id_index" on "nc_goya___nc_m2m_rv6ot3evwh" ("table2_id") - current transaction is aborted, commands ignored until end of transaction block
No description
Brody
Brody11mo ago
unfortunately there's not too much i can do if the data in the database is malformed. might just have to use a fresh nocodb
candide
candideOP11mo ago
Well, I really appreciate the guide and all the help! The updated dockerfile was definitely something I would've missed if it wasn't for your help, and I'm sure the guide will help others. I'm going to try flagging down a NocoDB dev to see if the data error can be fixed, and report back in case someone else has the same error.
Brody
Brody11mo ago
i wish you the best of luck! thank you for the trains!
candide
candideOP11mo ago
So, update on this issue: I've been troubleshooting with NocoDB for the last little bit, and here's what was said so far. I was asked to run select * from pg_indexes. The NocoDB dev looked over the results and said, "Somehow index list query returns empty list even if there are index and which is causing this issue." I was then asked to do the following:
Please run the following query as well .... replace schema_name with your schema name used in connection. And you should connect to the database you are using as well.
SELECT
f.attname AS cn,
i.relname as key_name,
ix.indnatts, ix.indkey, f.attnum as seq_in_index,
pg_catalog.format_type(f.atttypid,f.atttypmod) AS type,
f.attnotnull as rqd,
p.contype as cst,
p.conname as cstn,
CASE
WHEN i.oid<>0 THEN true
ELSE false
END AS is_index,
CASE
WHEN p.contype = 'p' THEN true
ELSE false
END AS primarykey,
CASE
WHEN p.contype = 'u' THEN 0
WHEN p.contype = 'p' THEN 0
ELSE 1
END AS non_unique_original,
CASE
WHEN p.contype = 'p' THEN true
ELSE false
END AS primarykey,
CASE
WHEN p.contype = 'u' THEN 0
WHEN p.contype = 'p' THEN 0
ELSE 1
END AS non_unique,
CASE
WHEN f.atthasdef = 't' THEN pg_get_expr(d.adbin, d.adrelid)
END AS default FROM pg_attribute f
JOIN pg_class c ON c.oid = f.attrelid
JOIN pg_type t ON t.oid = f.atttypid
LEFT JOIN pg_attrdef d ON d.adrelid = c.oid AND d.adnum = f.attnum
LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
LEFT JOIN pg_constraint p ON p.conrelid = c.oid AND f.attnum = ANY (p.conkey)
LEFT JOIN pg_class AS g ON p.confrelid = g.oid
LEFT JOIN pg_index AS ix ON f.attnum = ANY(ix.indkey) and c.oid = f.attrelid and c.oid = ix.indrelid
LEFT JOIN pg_class AS i ON ix.indexrelid = i.oid
WHERE
c.relkind = 'r'::char
AND n.nspname = '__schema_name__'
and i.oid<>0
AND f.attnum > 0
ORDER BY i.relname, f.attnum;
My question is: how do I find the schema name to replace and run this query?
candide
candideOP11mo ago
I tried to look it up online and ran the following two queries, but got a bunch of results (I'm using Dbgate):
select schema_name
from information_schema.schemata;
and
select nspname
from pg_catalog.pg_namespace;
select schema_name query results
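(One way to narrow that list down, assuming a fairly standard setup: NocoDB's tables are all prefixed with nc_, so asking Postgres which schema those tables live in should point at the schema name to substitute into the dev's query, usually public:)

-- lists the schema(s) that contain NocoDB's nc_-prefixed tables
SELECT DISTINCT schemaname
FROM pg_tables
WHERE tablename LIKE 'nc_%';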
Brody
Brody11mo ago
I'm sorry but I have no clue here, I have never done any database management ever, only migrations
candide
candideOP11mo ago
No problem! I'll do some more digging and troubleshooting with them, and report the solution here.
Brody
Brody11mo ago
sounds good, I again wish you luck!
candide
candideOP11mo ago
Good news: the issue has been resolved. I didn't need to run that other query after all; I just needed to deploy the following image provided by a NocoDB dev, which skips the transaction so the deployment works as expected:
nocodb/nocodb-timely:0.204.0-pr-7425-20240113-1241
Then, once that deployed successfully, I just needed to switch back to nocodb:latest. Thank you again for your support, Brody! I've been dealing with this issue for months, and I'm sure the fresh deploy also made a difference, somehow. I'll probably refer to that guide in the future, and hopefully it helps someone else.
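(For anyone verifying a fix like this afterwards: once the migration finally runs through, a scoped check against pg_indexes, along the lines of the sketch below, should list the index named in the original error.)

-- sketch: confirm the index from the failing migration step now exists
SELECT schemaname, indexname, indexdef
FROM pg_indexes
WHERE tablename = 'nc_goya___nc_m2m_rv6ot3evwh';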
Brody
Brody11mo ago
oh wow i would not have ever come to that solution in a million years, very happy you were able to get it working!