Railway #✋|help
Concurrent connections time out above 20 connections
Created by oldbettie on 9/21/2024 · 31 replies
Yeah, for sure. I am loving Railway; we just need to make sure it can handle our users' requests at a decent cost before we switch.
guess I will keep trying to debug. Thanks for your help
Do you think this is likely a concurrency issue with Supabase? Actually, I know it's not that, since my SST test actually makes even more connections.
It can be as high as 3 seconds for a single call to Supabase when the load test is running. A normal request from me loading the page is 400 ms.
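One way to confirm whether the Supabase round trip itself is what blows out under load is to time the call directly on the server. A minimal sketch with @supabase/supabase-js, where the env var names and the events table are placeholders:

import { createClient } from '@supabase/supabase-js'

// Placeholder env vars and table name, for illustration only.
const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_ANON_KEY!)

export async function timedEventsQuery() {
  const start = performance.now()
  const { data, error } = await supabase.from('events').select('*').limit(20)
  const elapsedMs = performance.now() - start
  // If this number tracks the 3 s seen under load, the database (or its connection
  // pool) is the bottleneck; if it stays near 400 ms, the time is spent in the server.
  console.log(`supabase events query: ${elapsedMs.toFixed(0)} ms`, { rows: data?.length, error })
  return data
}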
Yeah, let me check Honeycomb.
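Honeycomb ingests OpenTelemetry traces, so wrapping the database call in its own span is a quick way to see whether the time goes to Supabase or to queueing inside the Node process. A rough sketch, assuming an OpenTelemetry SDK is already configured to export to Honeycomb (the tracer and span names here are invented):

import { trace } from '@opentelemetry/api'

const tracer = trace.getTracer('app-server') // invented tracer name

export async function withDbSpan<T>(loadData: () => Promise<T>): Promise<T> {
  // If this span stays short while the enclosing request span grows to seconds,
  // the extra latency is queueing in the app, not the database.
  return tracer.startActiveSpan('supabase.query', async (span) => {
    try {
      return await loadData()
    } finally {
      span.end()
    }
  })
}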
I am in Asia, but that shouldn't affect the time between my frontend talking to my backend.
Supabase is eu-central
All in the EU.
It's Next.js and Supabase.
I have actually made loads of improvements in the Railway deployment for that reason. It makes half the requests that the SST deployment makes.
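For reference, one common way to get that kind of reduction in a Next.js App Router app is to dedupe data loaders per render with React's cache(), so several components that need the same row only cause one query. A hedged sketch, with the loader and table names invented:

import { cache } from 'react'
import { createClient } from '@supabase/supabase-js'

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_ANON_KEY!)

// cache() memoizes the loader for the duration of one server render, so repeated
// calls with the same id across components hit Supabase only once.
export const getEvent = cache(async (id: string) => {
  const { data } = await supabase.from('events').select('*').eq('id', id).single()
  return data
})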
Interesting
I agree. I'm just not sure why the scaling I have implemented in my server has not impacted my own instance testing. I get the same result with 1 GB RAM / 1 CPU as I do with 3 GB RAM / 3 CPU. In reality, with any more than 10 concurrent requests I go from a 900 ms response to 3-4 seconds. With any more than 30 concurrent requests I go up to 8-10 seconds. Beyond 100 it can be as high as 11 seconds, and it often crashes the server.
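One thing that would explain identical results at 1 CPU and 3 CPU: a single Node.js process runs JavaScript on one core, so extra vCPUs sit idle unless the server is clustered. A minimal sketch of the idea using node:cluster with a bare HTTP server (a real Next.js deployment would fork its standalone server instead; this only shows the shape):

import cluster from 'node:cluster'
import http from 'node:http'
import { availableParallelism } from 'node:os'

if (cluster.isPrimary) {
  // Fork one worker per core so CPU-bound work (rendering, serialization) spreads
  // across all vCPUs instead of queueing on a single event loop.
  for (let i = 0; i < availableParallelism(); i++) cluster.fork()
} else {
  http
    .createServer((req, res) => res.end(JSON.stringify({ now: Date.now() })))
    .listen(Number(process.env.PORT ?? 3000))
}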
sure. Thanks for helping out
loadtest -c 30 https://utilities.up.railway.app/now -t 20 -k
Requests: 506, requests per second: 101, mean latency: 287.4 ms
Requests: 504, requests per second: 101, mean latency: 287.7 ms
Requests: 503, requests per second: 101, mean latency: 289.3 ms
Requests: 509, requests per second: 102, mean latency: 286.5 ms
Requests: 514, requests per second: 103, mean latency: 284.9 ms
Requests: 1181, requests per second: 135, mean latency: 220.7 ms
Requests: 1185, requests per second: 136, mean latency: 219.5 ms
Requests: 1175, requests per second: 134, mean latency: 220.8 ms
Requests: 1188, requests per second: 136, mean latency: 219.9 ms
Requests: 1191, requests per second: 135, mean latency: 219.7 ms
Requests: 1866, requests per second: 137, mean latency: 220.8 ms
Requests: 1869, requests per second: 137, mean latency: 220.6 ms
Requests: 1861, requests per second: 137, mean latency: 220.3 ms
Requests: 1866, requests per second: 136, mean latency: 221.2 ms
Requests: 1873, requests per second: 137, mean latency: 220.6 ms

Target URL: https://utilities.up.railway.app/now
Max time (s): 20
Concurrent clients: 150
Running on cores: 5
Agent: keepalive

Completed requests: 12723
Total errors: 0
Total time: 20.018 s
Mean latency: 233.6 ms
Effective rps: 636
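As a sanity check on that run, latency times throughput should roughly equal the number of in-flight requests (Little's law), and it does, which suggests the client's own concurrency cap rather than the server was the limit here:

mean latency x effective rps ≈ clients in flight
0.2336 s x 636 rps ≈ 149 ≈ 150 concurrent clients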
That doesn't really help me load test my own instance simulating my users; that endpoint is a very simple JSON response, while our page makes multiple database calls and loads dozens of images. Is there a better way to (A) test that the server configuration will handle our concurrent users, which can be as high as 3,000-5,000 depending on events, and (B) simulate this to estimate monthly cost averages compared to other platforms?
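One option for both questions is a scriptable load tool that replays a realistic page view (HTML plus the API calls and a few images it triggers) while watching CPU/RAM in the Railway metrics; since Railway billing is usage based, the resource draw observed at a given request rate can be extrapolated to a rough monthly estimate. A sketch using k6 (a different tool than loadtest, but easier to script multi-request flows), with the paths and traffic shape invented as placeholders:

import http from 'k6/http'
import { sleep } from 'k6'

export const options = {
  stages: [
    { duration: '2m', target: 500 },   // ramp up
    { duration: '5m', target: 3000 },  // event-level peak (placeholder numbers)
    { duration: '2m', target: 0 },     // ramp down
  ],
}

export default function () {
  // One simulated user: load the page, then the API and image requests it triggers.
  http.get('https://dev-app.website-name.com/')
  http.batch([
    'https://dev-app.website-name.com/api/events',      // invented endpoint
    'https://dev-app.website-name.com/images/hero.jpg', // invented asset
  ])
  sleep(Math.random() * 3 + 1) // think time between page views
}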
Just curious if I am doing something wrong here. It seems odd that SST is able to cold start loads of Lambda functions in half the time a server takes.
SST
loadtest -c 60 https://dev-app.website-name.com -t 20 -k
Requests: 46, requests per second: 9, mean latency: 2612.5 ms
Requests: 48, requests per second: 10, mean latency: 2369.5 ms
Requests: 62, requests per second: 12, mean latency: 2366.1 ms
Requests: 47, requests per second: 9, mean latency: 2372 ms
Requests: 57, requests per second: 11, mean latency: 2400.6 ms
Requests: 173, requests per second: 22, mean latency: 3149.4 ms
Requests: 149, requests per second: 21, mean latency: 3531.3 ms
Requests: 156, requests per second: 22, mean latency: 3509 ms
Requests: 159, requests per second: 22, mean latency: 3373 ms
Requests: 159, requests per second: 20, mean latency: 3408.3 ms
Requests: 255, requests per second: 21, mean latency: 3011 ms
Requests: 282, requests per second: 22, mean latency: 2801.1 ms
Requests: 262, requests per second: 21, mean latency: 2884.1 ms
Requests: 263, requests per second: 21, mean latency: 2928.8 ms
Requests: 264, requests per second: 21, mean latency: 2864.8 ms
Requests: 357, requests per second: 19, mean latency: 3283.9 ms
Requests: 353, requests per second: 18, mean latency: 3294.9 ms

Target URL: https://dev-app.website-name.com
Max time (s): 20
Concurrent clients: 300
Running on cores: 5
Agent: keepalive

Completed requests: 1772
Total errors: 0
Total time: 20.011 s
Mean latency: 3072.6 ms
Effective rps: 89
At double the concurrent requests, SST resolves in half the time and completes 1,772 requests in the same amount of time.