Railway•12mo ago
avtomonov

Nginx reverse proxy: upstream timed out

Hello, I have 2 services: a FastAPI backend and an nginx frontend serving as a reverse proxy. It works fine when the backend starts first and then the frontend; however, in the opposite case (for example, after a git push the backend takes longer to start), nginx keeps failing with:
2023/07/27 23:01:54 [error] 10#10: *83 upstream timed out (110: Connection timed out) while connecting to upstream, client: 192.168.0.2, server: , request: "GET /api/data HTTP/1.1", upstream: "http://[fd12:a13d:e0aa::2:937d:2da]:8000/api/data", host: "..."
Any idea why this could be happening?
7 Replies
Percy
Percy•12mo ago
Project ID: N/A
avtomonov
avtomonov•12mo ago
Here's my nginx config:
error_log stderr info;
pid "${GSK_HOME}/run/nginx/nginx.pid";
daemon off;
working_directory "${GSK_HOME}/run/nginx";

events {
    worker_connections 1024;
}

http {
    gzip on;
    gzip_types text/javascript application/json text/css image/svg+xml;
    client_max_body_size 0;
    large_client_header_buffers 8 64k;

    include /etc/nginx/mime.types;

    access_log /dev/stdout;
    error_log /dev/stdout;

    client_body_temp_path "${GSK_HOME}/run/nginx";
    proxy_temp_path "${GSK_HOME}/run/nginx";

    proxy_http_version 1.1;
    proxy_read_timeout 3600;

    proxy_redirect off;
    proxy_next_upstream off;
    proxy_set_header Host $http_host;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    resolver [fd12::10] valid=10s;
    proxy_intercept_errors off;

    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Host $server_name;

    server {
        listen 7860;
        listen [::]:7860;
        root ${GSK_DIST_PATH}/frontend/dist;

        location / {
            try_files $uri $uri/ /index.html;
        }

        location /api {
            client_max_body_size 16G;
            proxy_pass http://${GSK_BACKEND_HOST}:${GSK_BACKEND_PORT};

            proxy_connect_timeout 70s;
            proxy_send_timeout 86400;
            proxy_read_timeout 86400;
            send_timeout 86400;
        }
    }
}
Another strange thing I found while trying to fix it: if I use
proxy_pass http://backend.railway.internal:8000;
it doesn't work at all; nginx fails with
nginx: [emerg] host not found in upstream "backend.railway.internal" in /app/frontend/nginx.conf:52
but if I replace it with
proxy_pass http://backend:8000;
it starts working as described above (until the backend is restarted). @Brody 👋
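(Side note for anyone reading later: nginx resolves a literal hostname in proxy_pass once, when the config is loaded, and only consults the resolver directive when the upstream address comes from a variable. That would explain both symptoms here: the config refuses to load while backend.railway.internal isn't resolvable yet, and a cached address goes stale once the backend is redeployed. An untested sketch of the variable-based workaround, reusing the resolver from the config above and assuming the backend still listens on port 8000:)
location /api {
    resolver [fd12::10] valid=10s;                   # Railway private DNS, same as in the http block above
    set $backend_upstream backend.railway.internal;  # stored in a variable so nginx resolves it per request
    proxy_pass http://$backend_upstream:8000;        # no URI part, so the original /api path is passed through
}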
Brody
Brody•12mo ago
so would you be opposed to ditching nginx and moving to Caddy? read the description of this template https://railway.app/template/7uDSyj and if that's something you think would suit your needs, I'd be happy to help you modify the Caddyfile to your needs
avtomonov
avtomonov•12mo ago
Thanks, sure, I'll give it a try. Do you know how I can serve static files directly from the Caddy container instead of proxying to another service? Kind of like what this did in nginx:
root /app/frontend/dist;

location / {
    try_files $uri $uri/ /index.html;
}
Brody
Brody•12mo ago
untested, but this should do the job
handle {
    root * /app/frontend/dist
    try_files {path} index.html
    file_server
}
however, I highly recommend the 3-service approach like the example project shows
avtomonov
avtomonov•12mo ago
@Brody, here's what I'm getting with Caddy:
2023/07/28 08:46:28.585 ERROR http.log.error.log0 dial backend.railway.internal:8000: unknown network backend.railway.internal:8000 {"request": {"remote_ip": "10.10.10.12", "remote_port": "56778", "proto": "HTTP/1.1", "method": "GET", "host": "192.168.0.55:6377", "uri": "/api/health", "headers": {"User-Agent": ["Go-http-client/1.1"], "Accept-Encoding": ["gzip"]}}, "duration": 0.000281967, "status": 502, "err_id": "y0uuj15yr", "err_trace": "reverseproxy.statusError (reverseproxy.go:1299)"}
and this is the config I'm using:
{
    admin off # there's no need for the admin api in railway's environment
    persist_config off # storage isn't persistent anyway
    auto_https off # railway handles https for us, this would cause issues if left enabled
    log { # runtime logs
        format console # set runtime log format to console mode
    }
    servers { # server options
        trusted_proxies static private_ranges # trust railway's proxy
    }
}

:{$PORT} { # site block, listens on the $PORT environment variable, automatically assigned by railway
    log { # access logs
        format console # set access log format to console mode
    }

    handle_path /api/* {
        reverse_proxy backend.railway.internal:8000/api/
    }
}
It shouldn't be an issue with the backend itself, because I can access it directly at https://backend-production-XXXX.up.railway.app/api/health. Actually, my bad: according to the Caddy docs, upstream addresses cannot contain paths or query strings.
Brody
Brody•12mo ago
yeah, as the comments said in the original Caddyfile, if your backend does have an /api/ route then you don't want the handle_path block
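(Putting both fixes together, an untested sketch of what the site block could end up looking like: the upstream address with no path, a plain handle so the /api prefix still reaches the backend, and the static-file block from earlier as the fallback:)
:{$PORT} {
    handle /api/* {                                  # handle keeps the /api prefix; handle_path would strip it
        reverse_proxy backend.railway.internal:8000  # host and port only, Caddy rejects paths here
    }

    handle {                                         # fallback: serve the built frontend
        root * /app/frontend/dist
        try_files {path} /index.html
        file_server
    }
}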