How do I know if I am on the new n8n template or the old template?
Since the db migrations can break n8n (the Tini issue), how do I know whether a particular service is safe to migrate using the script?
Project ID:
34cc6607-4bf8-44c2-bed8-b7e4fbbe3386
that's the super old n8n template from jack. i only know that the newest n8n template from jack works with the new databases, because it deploys with them
It's not super old. It was deployed 5 months ago... So if I migrate this service, will it break my n8n instance? This one powers our entire billing and subscription system.
Is there a guide to migrate from the old template to the new template safely without downtime?
fair point, but in the template space, 5 months is old.
its either that or do a practice migration in another environment
> So if I migrate this service, will it break my n8n instance?
i honestly have no way of promising you that it won't, there is always a chance something goes wrong, and that service was deployed 5 months ago so you could run into that tini issue too.
> Is there a guide to migrate from the old template to the new template safely without downtime?
while not ideal, there will always be some downtime during a database migration of any kind, so i'm going to be brutally honest and tell you something you don't want to hear: i would do the migration and then see what happens
Doesn't the script do the migration in all environments? I read that on the help page.
you are right, so i'll be honest again, i have not done a migration myself
And you are correct, I do not want to hear that. I am not going to put my subscription business in a position where we cannot allow people to subscribe, cannot use parts of our product, and could potentially fuck up the integrity of our database during the busiest sales month of the year. That would cost me a minimum of tens of thousands of dollars and destroy my company's (and my) reputation.
maybe you could deploy the new n8n template from jack, and manually migrate your legacy database from the service you showed above, into the database in your new n8n deploy?
I will wait until someone reaches back out to me about what is being done about the database migrations.
im not too sure there is much they can do about your position
besides doing the migrations for you, but thats not something they can offer
Maybe someone on the team has some free time that we can hire them for... idk. Hit my dms if so. Our company will gladly pay to have this infrastructure modified professionally.
How does it break n8n sorry?
https://discord.com/channels/713503345364697088/1193279156708986991
I tried to run this script with my old n8n template and it never started back up. I tried it on a service that was no longer in use, to figure out whether the migration script would break it.
What was the error? Can you link to the project?
This one (the project mentioned in this thread) was deployed about 8 months after the one from my previous thread, so the versions aren't exactly the same, but it is not the docker version.
https://railway.app/project/e1f04c05-5bea-4d11-b787-1f1e98108f2b
This is the n8n instance that never came back up, but it was dead to our company, because we had major data loss over the authentication credentials issue from the really old versions of the template. I just never deleted it.
Did you set the envvar?
That it says, in the logs, to set?
to be clear, it wasn't the migration itself that broke it, it was due to some update n8n made at some point.
the migration ran and restarted the n8n service, the build pulled the latest n8n image and the config of the old template wasn’t compatible with the brand new image
My above messages were confusing and did not have precise language, let me try again for clarification...
- I had an old instance of N8N using a template that is now over a year old. The service still worked at this point, I just couldn't log into it.
- - As you can see (or maybe not, authorization and everything), before running this migration script there were NO tini issues in the logs on any of the previous deployments. Just some dumb message to restart execution 42. Never stopped the service from starting.
- - - The server started fine for this duration, I just couldn't log into it.
- After the issues with this template, we created a new instance using the new N8N template.
- - That is the project id that is referenced in this thread, and it is the one that must, absolutely, be migrated safely.
- I still had the template from this service (e1f04c05-5bea-4d11-b787-1f1e98108f2b) left over, not doing anything, so I wanted to use this as a trial run for the migration.
- This service has never been redeployed with a new version of n8n, so the issue is unlikely to be due to an n8n update.
It's probably pulling the latest image
Yup
What I need to know, from an engineering standpoint, is how to verify that this newer version (the one that is not the docker version) can be migrated to the new database without errors.
So, you're gonna have issues with Tini. You need to upgrade. Frankly, there are probably CVEs in your current one...
Use the environments feature
can we pause for a second, that screenshot you just sent is not the n8n template you should have deployed
Create a new environment, see if it deploys correctly
The project that needs to be migrated safely is here: 34cc6607-4bf8-44c2-bed8-b7e4fbbe3386
it is not the one jack made, i have vetted the template jack made, i have not looked over the other n8n template that it looks like you have deployed
I used whatever the interface let me 1-click install at the time.
that's a 404
Well, that's what it forked from
The migration script documentation says that it will do all environments, so I do not know how to verify this way.
this is the template you want https://railway.app/template/r2SNX_ i have worked with jack on this one
Yea it'll migrate em. But, if you deploy a new environment, and it works, then you're good to go
Cause, it'll deploy the "latest"
So. You'd deploy new environment, fix any issues, then push those fixes upstream
Deploy a new environment on the same project?
Can I copy it to a different project somehow to be safe?
Yup. It's kinda clunky but you can generate a template for the project and then use that template
Will that copy the database data too or do I need to pg-dump it?
You'd have to pg dump it
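Not an official procedure, just a minimal sketch of that dump/restore step, assuming you can reach both databases from wherever you run it and that OLD_DATABASE_URL / NEW_DATABASE_URL hold the two connection strings:
```bash
# dump the old n8n Postgres into a custom-format archive
pg_dump "$OLD_DATABASE_URL" --format=custom --no-owner --file=n8n_backup.dump

# restore it into the database the new template deployed
pg_restore --dbname="$NEW_DATABASE_URL" --no-owner --clean --if-exists n8n_backup.dump
```
Custom format lets pg_restore handle object ordering, and --clean/--if-exists keeps the restore repeatable if you do a trial run first.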
it also won't copy any variables
You, in all likelihood, probably don't want to copy the data for n8n
Ah, yes, you'll have to bulk copy those
Error: Each repo in a template must be a public repo.
If I change it from private to public, will that cause a restart/redeployment?
I don't believe so
But, if it does, you can just cancel the deploy
visibility shouldn't trigger anything though, I'm 99% certain
For future reference, it does not.
Noice
I am deploying and testing the template migration now.
As in, you made a new project?
Can you link me?
Yes, I created the project using the template.
414984f7-f927-4b66-a542-68b1c13ebfb1
I am setting up some sample data for good measure before running the migration.
This template deployed a v2 database automatically 😦
yep all templates have done that for a few months by now
may i have a screenshot of the new project for n8n?
Should really only make a difference for the n8n deployment
With the Tini thing
Like, DATABASE_URL goes to the right place
Is really all that matters
ID?
This exercise was fun but it didn't help me test that the old data wasn't going to get corrupted through the migration script lol
dean, i really don't mean to be pushy, but please deploy jack's n8n template instead
it is feature complete
Can I deploy them side by side in one project?
no, it would need to be another project
If I deploy a new project with the new template and somehow get the postgres db to stay in sync between the two so that it can safely be moved over, is there a way to forward the webhook url from the old service to the new service? Many different services connect to this n8n instance.
are you using a custom domain on the n8n in production?
No. I probably should have.
I wasn't really expecting it to change
The migration doesn't corrupt data
All it does is pg dump/pg restore
We also run an integrity check on a hashed value of the DATABASE_URL, so it won't be updated if it's not the same as the old one
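(Not Railway's actual code, just an illustration of the kind of check described, with hypothetical variable names:)
```bash
# illustration only: skip the DATABASE_URL swap if the service's current value
# no longer hashes to the same thing as the value the old database provided
old_hash=$(printf '%s' "$OLD_PLUGIN_DATABASE_URL" | sha256sum | awk '{print $1}')
svc_hash=$(printf '%s' "$SERVICE_DATABASE_URL" | sha256sum | awk '{print $1}')

if [ "$old_hash" = "$svc_hash" ]; then
  echo "DATABASE_URL unchanged -> safe to repoint it at the new database"
else
  echo "DATABASE_URL was customized -> leaving it alone"
fi
```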
then you would have to manually update everything that calls n8n with the new railway domain
So it is safe to run the migration script on this project? If so, I will do it.
the data is always going to be saved, the migration does not delete databases, but neither cooper nor i can comment on whether you will run into problems with your 5 month old n8n deploy
I really appreciate the help from both of you. I know this is after hours.
i would strongly recommend migrating your old database into the new database deployed with jack's n8n template though
it is a far more complete template than what you are using, and it will stand the test of time far longer than what you have right now
When I attach a custom domain, the old domain will continue to work?
yes
I guess what I have to do is create an entirely new copy of the n8n workflows on a fresh deployment of n8n and just not run a database migration. That seems to be the only safe way this can be done.
you can export the workflows from within n8n, unless the 5 month old n8n service doesn't have that option?
Unfortunately, I don't think the credentials will be the same.
if it's just an export of the workflows, does that matter?
I mean the credentials for the individual nodes in the workflows won't be the same. I seem to remember that this was a problem when our last n8n instance crashed.
ah gotcha, then that would require manual modifications
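If the old build does ship the n8n CLI, a rough sketch of the export/import path (these commands exist in current n8n releases; whether a build that old has all of them is an assumption):
```bash
# on the OLD n8n service (e.g. from a shell inside the container)
n8n export:workflow --all --output=/tmp/workflows.json
n8n export:credentials --all --decrypted --output=/tmp/credentials.json

# on the NEW n8n service, after copying the two files across
n8n import:workflow --input=/tmp/workflows.json
n8n import:credentials --input=/tmp/credentials.json
```
--decrypted writes secrets in plain text, so handle and delete that file carefully; on import, the new instance re-encrypts the credentials with its own encryption key, which is what sidesteps the key mismatch problem.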
We can probably close this ticket. I guess I have to just grind out the copy and paste work for a while.
As a note (and I'm not being pushy I'm just clarifying), this doesn't resolve the issues brought up in the feedback thread, but I don't expect that to be resolved today.
> this doesn't resolve the issues brought up in the feedback thread, but I don't expect that to be resolved today.
Which one, sorry? The URL remapping? Config remapping is like, really hard and not built into the template system yet 😦 The good news is this is one of 3 big ticket items for product this quarter: making the template creation/evolution experience WAY better
That's good to know. Extending the deadline until after it's implemented and working correctly would then be an easy solution without too much trouble.
We wouldn't be able to implement 100% config mapping since it's not versioned ATM and there's no way to "import" it
It would have to exist on this system that we're moving people to :/
Oh, sorry, I read that again and realized it wasn't talking about the database url remapping.
Yea
Which, FWIW, I brought up. 80% no but will know this evening
For sure
!remind me to respond to this in 12 hours
Got it, I will remind you to respond to this at Tue, 09 Jan 2024 15:58:06 GMT
To fix the other issue with the migration, what I would need is just that set of credentials that can swap between the two services once the migration is complete.
I still have to do a lot, and I do mean a lot, of configuration work manually (this entire thing is costing me at least 90 engineering hours, and I know it's costing you guys more), but at least then we have a solid no-downtime database move between the two services.
For Postgres?
The URL will still need to be remapped though
Which, is part of the creds
I don't use connection strings, not sure if many people do. Most external services use the full database connection values.
For our mission critical databases, this would allow us to only experience the downtime of the database migration itself, instead of the downtime plus updating 30+ external services to a new set of credentials which could leave my users stranded for hours.
I refuse to believe that I am the only customer that connects their databases outside of one service/project.
Here's how I see a great solution to the messy issue:
-> A user wants to start the migration process (yay migration!)
-> They are given a set of credentials that are used to update all services.
-> -> These credentials are pointing to the OLD database until the migration is done.
-> User confirms that they have updated credentials everywhere
-> Old db is shut down for migration (downtime yes, but data consistency issues, no)
-> New db is provisioned from the old data
-> User confirms that the old data is in the new db (yay!)
-> the credentials now connect to that new database instead
If you ask me, this is how I would have designed this migration process.
Ideally there would be a bidirectional sync so that there's no downtime, but I understand that would require significant engineering that can't be dedicated to this migration process.
This puts all of the responsibility on the end user like me, not a script. And you get the transfer logic between the environment variables for free.
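For the "same credentials on both sides" part of that flow, a minimal sketch (hypothetical role name, password, and database name, assuming admin access to both Postgres instances):
```bash
# create an identical application role on the old AND the new Postgres, so every
# external service can keep the same username/password across the cutover
APP_ROLE=n8n_app          # hypothetical role name
APP_PASSWORD='change-me'  # placeholder, not a real secret

for url in "$OLD_DATABASE_URL" "$NEW_DATABASE_URL"; do
  psql "$url" <<SQL
CREATE ROLE ${APP_ROLE} LOGIN PASSWORD '${APP_PASSWORD}';
GRANT CONNECT ON DATABASE n8n TO ${APP_ROLE};            -- database name assumed
GRANT ALL ON ALL TABLES IN SCHEMA public TO ${APP_ROLE};
SQL
done
```
Flipping which host those stable credentials actually reach is the "layer in front of the host variable" idea below.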
I am not a devops guy or a DB admin, so I might be oversimplifying, but it really seems like it should be this easy:
-> Put a layer in front of the host variable, and the port variable.
-> Create 2 users on the old db and the new db
-> Have some way to switch that layer from the old host variable to the new host variable, same with port, via a UI or a cli.
-> profit?
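A rough sketch of that host/port layer, using socat as a dumb TCP forwarder (hypothetical hostnames; a production version would need health checks and connection draining):
```bash
#!/usr/bin/env bash
# tiny TCP pass-through in front of Postgres: every external service connects to
# this proxy's host/port, and cutover is changing TARGET_HOST and restarting it
TARGET_HOST="${TARGET_HOST:-old-db.internal}"   # flip to the new host at cutover
TARGET_PORT="${TARGET_PORT:-5432}"

exec socat TCP-LISTEN:5432,fork,reuseaddr "TCP:${TARGET_HOST}:${TARGET_PORT}"
```
Open connections still drop at the switch, so this removes the credential-update scramble across 30+ services rather than all downtime.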
i'm not a devops guy or a DB admin either, but what you are talking about, a postgres proxy with seamless switchover, does not seem easy to implement
I would argue it's a hell of a lot easier than dealing with backlash from this disaster. People from this company probably hate me after the last 3 days. If you are going to force people to be inconvenienced, put their businesses and livelihoods at risk, and cost their companies their reputation, it's really the least that could be done.
fair point, and no one "hates" you in the slightest, this is all valid feedback