Twenty stack.yaml error 404
Hello guys, I'm trying to test Twenty in my Docker Swarm with Traefik on my cloud server. All services are running, but I get a 404 on both api.twenty.domain and twenty.domain.
Could the team have a look at that stack, please? I think it would be nice to let people grab a stack.yaml they can deploy directly on Portainer.
The idea is to just run it with this stack, without the need for a .env file or the Docker makefile.
By the way, thank you all for your work and congrats on this project! So nice! See you!
30 Replies
Take a look at my stack; it works for me on Portainer: https://stackoverflow.com/a/78727948/697892
@aficio thank you buddy, I will test it. So, I do have pga in my swarm and I use Cloudflare to manage my DNS; do I have to keep it in, or can I cut it off?
Well, IDK what's going on. I've deployed many stacks from GitHub, but this one is really frustrating me.
I've got the compose YAML and env configured, Traefik and everything is up, but I get error 404.
I guess it totally depends on your use case... you only need the Cloudflare service in your docker stack if you are planning on using Cloudflare Tunnels to expose your app to the internet.

I'll tell you a bit about my use case. I have a small site at my business with power redundancy, failover, and an online UPS. I bought an ASUS PN53 mini PC; it meets my requirements of being power efficient, I could install 64 GB of RAM and dual M.2 storage, and the CPU capacity was enough for my use case. I installed Proxmox on it using a ZFS mirrored setup for storage redundancy (the most important feature of a server, in my opinion), which gives me the commercial-server equivalent of RAID 1 mirroring. Then I installed Portainer on a regular VM on top of Proxmox.

My server is on premises (in my office), but I can't afford a dedicated IP or a dedicated internet connection. What I really needed was stability, so I just put in OPNsense as my firewall and use fiber as the main internet with Starlink as failover. That way I get almost 100% uptime, but I don't get a static or even a dynamic public IP; most residential internet services these days are switching to CGNAT, which makes port forwarding useless because you don't get a reachable IP assigned to your modem.

Over the years, after trying many things to solve this "hosting on premises on residential internet" problem, I found that the best, fastest, most reliable solution is Cloudflare Zero Trust tunnels. With them you can point any subdomain of your Cloudflare-hosted domain to a localserver.local:port. You can host different apps on different ports of that Portainer installation and it works very well for different subdomains, each with full SSL outward through Cloudflare. It's free as long as you don't use it to forward video; it should be good for any regular web app except streaming, PBXes, or very specific media services that require hundreds or thousands of available ports.
Now I got a bit carried away, but the corollary is: if you are hosting your app on Vultr, AWS, etc., you don't need a Cloudflare tunnel. Remove that docker service from the stack, create an A record on Cloudflare for your instance, and use that subdomain name for the SERVER_URL. If you do keep the tunnel, it's just a small extra service in the stack, something like the sketch below.
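(A minimal sketch of that tunnel service, assuming a tunnel already created in the Zero Trust dashboard; the network name `twenty` and the app service name/port the tunnel forwards to are placeholders for whatever your stack uses:)

```yaml
# Hypothetical sketch of a Cloudflare Tunnel sidecar in a compose/stack file.
# The tunnel's public hostname (set in Zero Trust) would point at the app
# service, e.g. http://server:3000 (service name and port are placeholders).
services:
  cloudflared:
    image: cloudflare/cloudflared:latest
    command: tunnel --no-autoupdate run
    environment:
      TUNNEL_TOKEN: ${CLOUDFLARE_TUNNEL_TOKEN}  # token from the Zero Trust dashboard
    networks:
      - twenty  # join the app's network so the tunnel can reach it by service name
networks:
  twenty:
    external: true
```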
I wasn't able to make it work with Traefik, but I didn't insist too much because I was already using the Cloudflare tunnel.
Interesting! I think I will try it out using Cloudflare. It's not hard to do, right?
super simple and incredibly reliable
I'm using Hetzner
I have a Docker Swarm with a lot of apps
So many of them are running easily
Yeah, sounds good! If you are already hosting a bunch of apps there, Cloudflare tunnels will work very well to serve the app on a regular :443 over HTTPS.
But this Twenty is hard for me to get up right now.
Don't use Traefik; just open up a port and use that port with the tunnel, maybe?
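(A fragment of what that could look like merged into the stack; the service name and port 3000 are assumptions, adjust to whatever the app actually listens on:)

```yaml
# Hypothetical fragment: publish the app port directly instead of routing
# through Traefik, then point the tunnel's public hostname at that port.
services:
  server:               # placeholder name for the Twenty app service
    ports:
      - "3000:3000"     # assumes the app listens on 3000
# In Zero Trust, set the tunnel's public hostname to http://<docker-host>:3000
```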
yeah it took a good week for me
wow
the database was horrible
I was almost there, the 3 services running, but 404.
I wanted to use an external Postgres on Supabase just to not have to deal with managing a DB (I've heard horror stories from people hosting DBs on Docker), so I tried, but I couldn't make it work. It's a weird architecture decision as I saw it... the twenty-db container just enables the graphql extension on Postgres, which is OK, but it also creates a user 'twenty' and a specific schema, and it seemed to me that the app expected that specific schema and user. It's a weird side effect in terms of clean code for me... it would be great to be able to just connect a free Supabase DB that does not require admin privileges, because the admin privileges are only used to create the twenty user and the schema. Weird arch.
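(For context, a loose sketch of the bundled DB service being described; the image name, tag, and credentials here are my guesses from memory, not a verified config:)

```yaml
# Hedged sketch of the bundled database service described above: a Postgres
# image that enables pg_graphql and seeds the 'twenty' user and schema on
# first run. Image and env vars are assumptions, not a verified config.
services:
  db:
    image: twentycrm/twenty-postgres:latest
    environment:
      POSTGRES_USER: twenty       # the specific user the app seems to expect
      POSTGRES_PASSWORD: twenty
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```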
Probably the server URL and the port! They really have to match; just using Cloudflare with the tunnel did the trick for me.
You could deploy Supabase into your swarm
You know, I did try that, and a regular Postgres, and I tried Supabase as a service... none of them worked.
it's something to do with the way the app expects to use the user twenty:twenty on a specific schema... weird
yes
I have tried another DB
I have PG and MySQL in my swarm
I was expecting to use those, to avoid installing another one and letting it consume more resources.
yes, it makes sense right? that was exactly my point...
it makes sense
It all makes sense
It's the same with Redis, we could use an external one
I do have MinIO in my swarm
So I could use everything from outside the Twenty service
Since I posted that last stack I did add Redis, and it seems to work.
It would be very clean, like the other apps.
I had to use bitnami/redis because it seems Twenty expects Redis not to have a password.
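(Roughly what that looks like in the stack; ALLOW_EMPTY_PASSWORD is a standard Bitnami toggle, while the service and network names are placeholders:)

```yaml
# Sketch of a passwordless Redis using the Bitnami image, as described above.
services:
  redis:
    image: bitnami/redis:latest
    environment:
      ALLOW_EMPTY_PASSWORD: "yes"   # Twenty seemed to expect Redis with no auth
    networks:
      - twenty                      # placeholder network shared with the app
```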
I use n8n with Redis and PgBouncer, among others, but this one needs that kind of attention to be really clean.
yes
I have another app that works like this
I took the password off to make it work
If it's not exposed, it will be fine
Oh, n8n with Redis sounds good, I'll have to look into that; my n8n is getting pretty big these days.
I liked Twenty a lot, that's why I'm so eager to get it online.
IDK if you use WordPress, but WordPress with Redis is fantastic.
I cannot find my Supabase stack, but if I do, I will send it to you.
Twenty really is fantastic; it looks like it's exploding in user adoption, and the arch is great. It's really just minor details that are inconsistent, like this DB thing, but other than that... I'm happy I made it work. And yes, WP + Redis works great. For the PHP stack I use RunCloud though... great dev experience... thanks!
Well, I'll try to cut Traefik off and add your Cloudflare service to the stack.
If you remember where I can get the Cloudflare token, please tell me... I'll try to find it right now.
it's the global one, not the api key if i remember correctly
I was using an old tunnel version; maybe the new one uses non-legacy API tokens... not sure!
I found it in Zero Trust
yeah that's the one
Thanks for the feedback! Right now, we need a twenty Postgres super-user to create extensions (mainly pg_graphql, which we plan to deprecate, but that's a long-term effort, and foreign data wrappers). pg_graphql is not supported on most cloud providers, and that's why we are providing a Postgres container. I think Supabase is doing the same under the hood. If Supabase already has the pg_graphql extension enabled + foreign data wrappers, you don't need the twenty super-user and you can use a Supabase DB 🙂
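(If you go that route, a hedged sketch of pointing the server at an external Supabase Postgres; the `PG_DATABASE_URL` variable and image name are my assumption from the self-hosted compose of that era, and the connection string is a placeholder:)

```yaml
# Hedged sketch: Twenty server using an external Postgres (e.g. Supabase)
# that already has pg_graphql and foreign data wrappers enabled.
services:
  server:
    image: twentycrm/twenty:latest   # assumed image name
    environment:
      SERVER_URL: https://twenty.example.com
      PG_DATABASE_URL: postgres://user:password@db.example.supabase.co:5432/postgres
```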
Our vision for how to ease the setup is the following: we will create a basic admin UI that only requires a few env variables to be set and no super-admin privilege. This admin UI will guide you through the setup process and the upgrade process, which is also painful at the moment.
Regarding Cloudflare, I also think it's a great solution if you don't want to deal with SSL and DNS setup. It's what we are using in production too