Nginx Reverse Proxy with Private Networking

I am trying to set up an nginx server to reverse proxy two other services (backend & frontend) and expose them publicly via this nginx. I set the port of both backend & frontend to 80, and neither is public, but I want to merge them such that / serves the frontend service and /api/ serves the backend service. But when I set up the nginx conf that way and used the private networking address, it crashes and doesn't work. Please help
155 Replies
Percy
Percy17mo ago
Project ID: 9960c53b-53ce-401e-8708-8404250e7ad9
aswinshenoy
aswinshenoyOP17mo ago
9960c53b-53ce-401e-8708-8404250e7ad9
aswinshenoy
aswinshenoyOP17mo ago
aswinshenoy
aswinshenoyOP17mo ago
server {

listen 80;
server_name localhost;

server_tokens off;

proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;

location / {
proxy_pass http://zivah.railway.internal;
}

location /api {
proxy_pass http://zivani.railway.internal;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_redirect off;

proxy_connect_timeout 70s;
proxy_send_timeout 86400;
proxy_read_timeout 86400;
send_timeout 86400;
}

}
My nginx config
Brody
Brody17mo ago
try adding resolver fd12::10 in the server block. that's the DNS resolver the containers use to resolve internal addresses; it seems nginx needs to be told explicitly. untested though
aswinshenoy
aswinshenoyOP17mo ago
thanks, trying out now
Brody
Brody17mo ago
also, it's preferable if you used a non-privileged port like 8080 instead of 80
aswinshenoy
aswinshenoyOP17mo ago
Brody
Brody17mo ago
resolver doesn't accept ipv6 addresses?
aswinshenoy
aswinshenoyOP17mo ago
In fact my backend was at 8000 and frontend at 3000. But then I set PORT in both services to 80 hoping to get it working
Brody
Brody17mo ago
maybe this is the syntax you need to use? i'm not too familiar with nginx myself: resolver [fd12::10]
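i.e. something like this (untested sketch; the service name and ports are placeholders, and the variable is there so nginx resolves the name at request time via the resolver instead of once at startup):

```nginx
server {
    listen 8080;

    # IPv6 resolver address goes in brackets
    resolver [fd12::10];

    location / {
        # using a variable forces runtime DNS resolution through the resolver above
        set $upstream myservice.railway.internal;
        proxy_pass http://$upstream:8080;
    }
}
```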
aswinshenoy
aswinshenoyOP17mo ago
aswinshenoy
aswinshenoyOP17mo ago
I am using nginx alpine btw
Brody
Brody17mo ago
shouldn't be a problem, I use alpine or distroless in all my apps
aswinshenoy
aswinshenoyOP17mo ago
went back to the same old error now
Brody
Brody17mo ago
and you're using this syntax now?
aswinshenoy
aswinshenoyOP17mo ago
yes
aswinshenoy
aswinshenoyOP17mo ago
Brody
Brody17mo ago
interesting, well I'll do some messing around and get back to you!
aswinshenoy
aswinshenoyOP17mo ago
let me know if you need any more details
Brody
Brody17mo ago
will do
aswinshenoy
aswinshenoyOP17mo ago
I tried with Caddy to see if it had to do with nginx, but same result
aswinshenoy
aswinshenoyOP17mo ago
aswinshenoy
aswinshenoyOP17mo ago
but Caddy instead throws a 502 directly and doesn't crash
Brody
Brody17mo ago
https://proxy-production-9e87.up.railway.app/ just let me refine the config a bit and i'll send it over soon
aswinshenoy
aswinshenoyOP17mo ago
you're awesome, thank you ❤️
Brody
Brody17mo ago
frontend: https://proxy-production-9e87.up.railway.app/ backend: https://proxy-production-9e87.up.railway.app/api/ nginx.conf:
worker_processes 5;

worker_rlimit_nofile 8192;

events {
worker_connections 4096;
}

http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" "$http_x_forwarded_for"';
access_log /dev/stdout;
error_log /dev/stdout;
sendfile on;
keepalive_timeout 65;
tcp_nopush on;
server_names_hash_bucket_size 128;
server_tokens off;

resolver [fd12::10] valid=10s;

server {
listen 3000;
listen [::]:3000;
server_name localhost;

proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;

location / {
set $frontend frontend.railway.internal;
proxy_pass http://$frontend:3000;
}

location /api {
return 302 $http_x_forwarded_proto://$host/api/;
}

location ^~ /api/ {
set $backend backend.railway.internal;
proxy_pass http://$backend:3000;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_redirect off;

proxy_connect_timeout 70s;
proxy_send_timeout 86400;
proxy_read_timeout 86400;
send_timeout 86400;

rewrite /api(.*) $1 break;
}
}
}
i haven't tested this super extensively, but it does work
aswinshenoy
aswinshenoyOP17mo ago
let me quickly plug in and check for my case
Brody
Brody17mo ago
i think i might make a template out of this, but use caddy instead (that nginx config is kinda bulky for what it does), since i've seen a good few requests from people who have separate frontend and backend services but want to serve them both from the same domain
aswinshenoy
aswinshenoyOP17mo ago
I guess a little problem?
Brody
Brody17mo ago
show me your dockerfile?
aswinshenoy
aswinshenoyOP17mo ago
yes, you should definitely make one. I struggled for hours to find it online, and then came to Discord
aswinshenoy
aswinshenoyOP17mo ago
Brody
Brody17mo ago
even though you have an nginx proxy, would you be interested in the caddy version of this?
aswinshenoy
aswinshenoyOP17mo ago
yes, in our case we need the backend to be at /api or a sub-directory to use the server-side set cookies. I think there are probably a lot of such use cases
Brody
Brody17mo ago
this is my dockerfile
FROM nginx:1.24.0-alpine-slim

COPY nginx.conf /etc/nginx/nginx.conf
aswinshenoy
aswinshenoyOP17mo ago
yes, Caddy is nice, but nginx is what the internet will give you answers for, so I went ahead and tried nginx as I believed it probably had the best support
Brody
Brody17mo ago
yeah i agree, much more info about nginx than caddy, your choice makes complete sense
aswinshenoy
aswinshenoyOP17mo ago
I see you are updating the nginx.conf itself, I guessed it seeing the http block
Brody
Brody17mo ago
i've just used that dockerfile for anything nginx related i've ever done
aswinshenoy
aswinshenoyOP17mo ago
hmm pretty clean
aswinshenoy
aswinshenoyOP17mo ago
Brody
Brody17mo ago
is that supposed to be a 503
aswinshenoy
aswinshenoyOP17mo ago
Brody
Brody17mo ago
yeah is that okay? that has nothing to do with the proxy. also, the config i gave you sets nginx to listen on port 3000
aswinshenoy
aswinshenoyOP17mo ago
ahh, wait, it should be 80 / 443 right?
Brody
Brody17mo ago
no please keep it 3000
aswinshenoy
aswinshenoyOP17mo ago
aswinshenoy
aswinshenoyOP17mo ago
and expose the nginx via 3000 as PORT?
Brody
Brody17mo ago
no just set PORT = 3000 in the service variables
aswinshenoy
aswinshenoyOP17mo ago
yea gotcha
aswinshenoy
aswinshenoyOP17mo ago
aswinshenoy
aswinshenoyOP17mo ago
https://arena-nginx-production.up.railway.app/api/healthz/ now we have an nginx page and error, so something to do with my config
Brody
Brody17mo ago
your backend is returning 503 though, you can't expect your proxy to work if your backend alone isn't working
aswinshenoy
aswinshenoyOP17mo ago
this is my backend url -> https://zivani-production.up.railway.app/api/healthz/ it works with its own public domain. oh, you mean the error
Brody
Brody17mo ago
you can't expect your proxy to work if your backend alone isn't working
aswinshenoy
aswinshenoyOP17mo ago
ok, I need to do the migration and check then. but it should show this page instead, right? or does it only take 200?
Brody
Brody17mo ago
no, nginx will not proxy a 503 through by default, same with caddy's proxy
aswinshenoy
aswinshenoyOP17mo ago
hmm got it, let me migrate and check
Brody
Brody17mo ago
proxy_intercept_errors off;
aswinshenoy
aswinshenoyOP17mo ago
oh, this is to disable that interception and take it to the page instead? dope
Brody
Brody17mo ago
i think
aswinshenoy
aswinshenoyOP17mo ago
got it to run healthy in the -> https://zivani-production.up.railway.app/api/healthz/ (service url)
aswinshenoy
aswinshenoyOP17mo ago
https://arena-nginx-production.up.railway.app/api/healthz - but in the nginx output -> its still this
Brody
Brody17mo ago
show me the error log for that request please
aswinshenoy
aswinshenoyOP17mo ago
aswinshenoy
aswinshenoyOP17mo ago
192.168.0.4 - - [25/Jul/2023:18:30:26 +0000] "GET /favicon.ico HTTP/1.1" 502 552 "https://arena-nginx-production.up.railway.app/api/healthz"; "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36"
Brody
Brody17mo ago
makes sense. wild guess, is this a spring backend?
aswinshenoy
aswinshenoyOP17mo ago
nope, django WSGI
Brody
Brody17mo ago
way off
aswinshenoy
aswinshenoyOP17mo ago
aswinshenoy
aswinshenoyOP17mo ago
this is the command that runs backend (zivani)
Brody
Brody17mo ago
-b [::]:$PORT
aswinshenoy
aswinshenoyOP17mo ago
and in service env I set PORT = 8000. damn, is that it?
Brody
Brody17mo ago
yeah, you need to bind to all interfaces, not just all IPv4 interfaces, since internal networking is IPv6 only
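so for gunicorn the bind flag would look something like this (untested sketch; the module path is whatever your project uses):

```shell
# [::] listens on all IPv6 interfaces (and IPv4 too on a typical dual-stack host),
# whereas 0.0.0.0 is IPv4 only and unreachable over the IPv6-only private network
gunicorn -b [::]:$PORT framework.wsgi
```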
aswinshenoy
aswinshenoyOP17mo ago
giving it a shot here
Brody
Brody17mo ago
you really should be setting that start command in a railway.json file
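something like this in a railway.json at the repo root (a minimal sketch based on Railway's config-as-code format; swap in your own start command):

```json
{
  "$schema": "https://railway.app/railway.schema.json",
  "deploy": {
    "startCommand": "gunicorn -b [::]:$PORT framework.wsgi"
  }
}
```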
aswinshenoy
aswinshenoyOP17mo ago
it still went out to build the docker image 🤦‍♂️ and I was trying to save time
Brody
Brody17mo ago
uh yeah I've talked with railway about that, every little thing you do rebuilds from scratch
aswinshenoy
aswinshenoyOP17mo ago
hmm, this repo doesn't have a railway.json yet. we have our infra currently in EKS and manage it with Rancher, but thought we would offload a few things and try out Railway... so far, it looks very promising. yeah, it would have been great if I could literally give a docker image URL
Brody
Brody17mo ago
gotcha
aswinshenoy
aswinshenoyOP17mo ago
it's after all doing the same thing
Brody
Brody17mo ago
...you can
aswinshenoy
aswinshenoyOP17mo ago
building it all the time just costs Railway more $$$. oh, can I? how? I missed it. We actually build and keep our images in ECR, then EKS picks them up from there; I could actually use those images
Brody
Brody17mo ago
I assume you have your github repo linked to the service. you'd have to unlink the repo and then you'd see the button to add the image, but I don't think it would be applicable for you, it only supports public images and the docker and github image registries
aswinshenoy
aswinshenoyOP17mo ago
ah, they need to support private images
Brody
Brody17mo ago
I'm sure they will, it is still beta after all
aswinshenoy
aswinshenoyOP17mo ago
GHCR might support private, need to try and see
Brody
Brody17mo ago
yeah but there's no way to give railway credentials to pull a private image. okay, so what's the status with gunicorn?
aswinshenoy
aswinshenoyOP17mo ago
hmm, they should accept something like the pull secret that k8s does. I have a wild guess they themselves are using k8s internally
192.168.0.2 - - [25/Jul/2023:18:45:20 +0000] "GET /api/healthz/ HTTP/1.1" 404 179 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36"
https://arena-nginx-production.up.railway.app/api/healthz/ it just keeps loading now
gunicorn --timeout 30 --max-requests 1000 --max-requests-jitter 50 --workers 5 -b [::]:80 --log-level=error framework.wsgi is what I gave in
Brody
Brody17mo ago
there is some k8s stuff, but it's being slowly removed
aswinshenoy
aswinshenoyOP17mo ago
hmm
Brody
Brody17mo ago
this says port 80
aswinshenoy
aswinshenoyOP17mo ago
ah 🤦‍♂️ let me set it to 8000 everywhere now
Brody
Brody17mo ago
every service gets a PORT = 8000 variable set, and every service that you can configure to listen on $PORT do so
aswinshenoy
aswinshenoyOP17mo ago
https://zivah-production.up.railway.app/ - the frontend - again works with the service's own public URL, but then https://arena-nginx-production.up.railway.app/ is a bad gateway
Brody
Brody17mo ago
slow your horses, one thing at a time. backend: now that it's listening on port 8000, show me a deploy logs screenshot please
aswinshenoy
aswinshenoyOP17mo ago
2023/07/25 18:54:13 [error] 30#30: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.0.2, server: localhost, request: "GET /api/healthz/ HTTP/1.1", upstream: "http://[fd12:4f6a:612e::79:437e:9411]:80/healthz/", host: "arena-nginx-production.up.railway.app"
:8000 went missing. ok, my bad, the dockerfile wasn't pushed since I changed it to 8000 from 80
2023/07/25 18:56:59 [error] 31#31: *2 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.0.3, server: localhost, request: "GET /favicon.ico HTTP/1.1", upstream: "http://[fd12:4f6a:612e::ac:6c18:14fb]:3000/favicon.ico", host: "arena-nginx-production.up.railway.app", referrer: "https://arena-nginx-production.up.railway.app/api/healthz/";
ah well, it goes to 3000
aswinshenoy
aswinshenoyOP17mo ago
aswinshenoy
aswinshenoyOP17mo ago
did I mess up something again?
Brody
Brody17mo ago
https://arena-nginx-production.up.railway.app/api/healthz/ now returns 404. i know why. send as text please
aswinshenoy
aswinshenoyOP17mo ago
worker_processes 5;

worker_rlimit_nofile 8192;

events {
worker_connections 4096;
}

http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" "$http_x_forwarded_for"';
access_log /dev/stdout;
error_log /dev/stdout;
sendfile on;
keepalive_timeout 65;
tcp_nopush on;
server_names_hash_bucket_size 128;
server_tokens off;

resolver [fd12::10] valid=10s;
proxy_intercept_errors off;

server {
listen 3000;
listen [::]:3000;
server_name localhost;

proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;

location / {
set $frontend zivah.railway.internal:3000;
proxy_pass http://$frontend;
}

location /api {
return 302 $http_x_forwarded_proto://$host/api/;
}

location ^~ /api/ {
set $backend zivani.railway.internal:8000;
proxy_pass http://$backend;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_redirect off;

proxy_connect_timeout 70s;
proxy_send_timeout 86400;
proxy_read_timeout 86400;
send_timeout 86400;

rewrite /api(.*) $1 break;
}
}
}
Brody
Brody17mo ago
worker_processes 5;

worker_rlimit_nofile 8192;

events {
worker_connections 4096;
}

http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" "$http_x_forwarded_for"';
access_log /dev/stdout;
error_log /dev/stdout;
sendfile on;
keepalive_timeout 65;
tcp_nopush on;
server_names_hash_bucket_size 128;
server_tokens off;

resolver [fd12::10] valid=10s;
proxy_intercept_errors off;

server {
listen 3000;
listen [::]:3000;
server_name localhost;

proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;

location / {
set $frontend zivah.railway.internal:3000;
proxy_pass http://$frontend;
}

location /api {
set $backend zivani.railway.internal:8000;
proxy_pass http://$backend;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_redirect off;

proxy_connect_timeout 70s;
proxy_send_timeout 86400;
proxy_read_timeout 86400;
send_timeout 86400;
}
}
}
aswinshenoy
aswinshenoyOP17mo ago
Brody
Brody17mo ago
frontend time
aswinshenoy
aswinshenoyOP17mo ago
You're awesome!
2023/07/25 19:04:15 [error] 30#30: *28 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.0.2, server: localhost, request: "GET /favicon.ico HTTP/1.1", upstream: "http://[fd12:4f6a:612e::ac:6c18:14fb]:3000/favicon.ico", host: "arena-nginx-production.up.railway.app", referrer: "https://arena-nginx-production.up.railway.app/api/healthz/";
Brody
Brody17mo ago
well first
aswinshenoy
aswinshenoyOP17mo ago
2023/07/25 19:05:23 [error] 30#30: *32 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.0.3, server: localhost, request: "GET /favicon.ico HTTP/1.1", upstream: "http://[fd12:4f6a:612e::ac:6c18:14fb]:3000/favicon.ico", host: "arena-nginx-production.up.railway.app", referrer: "https://arena-nginx-production.up.railway.app/initialize";
Brody
Brody17mo ago
what framework is that frontend
aswinshenoy
aswinshenoyOP17mo ago
NextJS
Brody
Brody17mo ago
what's your start command
aswinshenoy
aswinshenoyOP17mo ago
let me get you the command that runs it, a sec
aswinshenoy
aswinshenoyOP17mo ago
aswinshenoy
aswinshenoyOP17mo ago
aswinshenoy
aswinshenoyOP17mo ago
do we need that [::] ipv6 thing?
aswinshenoy
aswinshenoyOP17mo ago
something like this?
Brody
Brody17mo ago
next start -H :: -p $PORT
aswinshenoy
aswinshenoyOP17mo ago
awesome, let me try that now
Brody
Brody17mo ago
make sure you have a service variable PORT = 3000
aswinshenoy
aswinshenoyOP17mo ago
btw, this should be fine right? (like if I want to give a default)
Brody
Brody17mo ago
nope delete line 33, 34
aswinshenoy
aswinshenoyOP17mo ago
how do I set a default?
Brody
Brody17mo ago
^
aswinshenoy
aswinshenoyOP17mo ago
yes I have 🫡, but it will take a while to build with this new package.json
Brody
Brody17mo ago
and what's your new start command
aswinshenoy
aswinshenoyOP17mo ago
this, being deployed
Brody
Brody17mo ago
okay just checking, since you like to change things from what I say
aswinshenoy
aswinshenoyOP17mo ago
the service port has already been set for a while
aswinshenoy
aswinshenoyOP17mo ago
Works perfectly now!!!
aswinshenoy
aswinshenoyOP17mo ago
❤️
Brody
Brody17mo ago
I'm happy I could help
aswinshenoy
aswinshenoyOP17mo ago
You did an amazing job at helping me
Brody
Brody17mo ago
thank you
aswinshenoy
aswinshenoyOP17mo ago
the Railway team themselves didn't bother much
aswinshenoy
aswinshenoyOP17mo ago
Now I can start migrating several of our core services into Railway
Brody
Brody17mo ago
i mean they're right, it is better suited for discord. that's your aws costs?
aswinshenoy
aswinshenoyOP17mo ago
Railway should probably get you something
Brody
Brody17mo ago
bro they won't even give me a sticker, i've been asking for stickers for so long
aswinshenoy
aswinshenoyOP17mo ago
ahhh!!! I initially thought you were from their support
Brody
Brody17mo ago
I'm just a community member
aswinshenoy
aswinshenoyOP17mo ago
I will mail them and tell them that you helped a lot. And once we hit some bills they probably should value it (I subscribed to Pro just to see if they will support). they should recruit you as support
Brody
Brody17mo ago
I'm too silly for them
aswinshenoy
aswinshenoyOP17mo ago
ahh, don't say so. Where are you from btw? what do you do for a living?
Brody
Brody17mo ago
maple syrup land, and I help people in the railway server for a living
aswinshenoy
aswinshenoyOP17mo ago
silly me had to google that one up!!! ❤️
aswinshenoy
aswinshenoyOP17mo ago
I suggest you make a few templates and contribute them to railway, like the nginx and caddy ones
Brody
Brody17mo ago
haha, made you google. i have made a few templates, and i'll be adding the caddy template in the future
aswinshenoy
aswinshenoyOP17mo ago
Brody
Brody17mo ago
I currently have 2$
aswinshenoy
aswinshenoyOP17mo ago
hmm...
Brody
Brody17mo ago
i don't know how it works, but oh well. pretty damn clean with caddy:
{
admin off
persist_config off
auto_https off
log { format console }
}

:{$PORT} {
reverse_proxy frontend.railway.internal:3000

handle_path /api/* {
reverse_proxy backend.railway.internal:3000
}
}
https://github.com/brody192/reverse-proxy
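the Dockerfile for the caddy version can be just as small, something like this (a sketch, not necessarily what the repo uses; assumes the Caddyfile sits next to it):

```dockerfile
FROM caddy:2-alpine

# /etc/caddy/Caddyfile is the default config path in the official image
COPY Caddyfile /etc/caddy/Caddyfile
```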
etrain116
etrain1169mo ago
Sorry to necro this old thread btw