Doubts about pricing
Hi, I'm considering the Hobby or Pro plan for an Express backend application I have, but I have doubts about the pricing limits. It's not clear to me how it works when it reads "pricing based on resource usage". For example, if I use the Hobby plan, is there a certain limit it just won't go above, or what can I do? I'm scared of paying for the plan but then receiving, I don't know, a $500 charge or something crazy because of "resource usage", you know what I mean? Can anyone clarify how it works? Also, if it helps, I would need at the very least 8 GB of RAM and 1 CPU, probably even more resources, but that would be the bare minimum to start with.
270 Replies
Project ID:
N/A
for starters, will your application actually be using 8GB 24/7?
Btw, a good thing to mention is that I don't need a database, I already have that somewhere else (I'm using MongoDB Atlas)
Well, not 24/7, but... maybe, let's say, 12 of the 24 hours, since many people will be using it. on average maybe half the time, yeah, maybe even more.
so then let's say an average of about 4GB?
yeah
let's say that for now
and 1 CPU as well, just to say something decent there
then that's $60 before you factor in egress and seat costs
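(that $60 is just the thread's numbers worked out: assuming roughly $10 per GB of memory per month and $20 per vCPU per month, which is what the figure implies rather than an official quote, 4 GB x $10 + 1 vCPU x $20 = $60 per month)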
so the resource usage is calculated with this, and only this? no "hidden surprises" then?
i mean that, and the plan, right?
the only hidden surprise would be not understanding how pricing works I guess
but yeah, that usage based pricing, and the seat cost for pro
right
About network usage, I expect each user will spend about 1 GB per day or so, and let's say I have like 500 users
unlike a VPS, on Railway you only pay for the resources you actually use, so if your app uses 0.1 vCPU and 500MB of memory 99% of the time, then you are going to pay significantly less than $60
it's not about users, it's about how much YOUR app uses
ah right, nice
so you will be paying to talk to atlas
in the form of egress
about network usage, what if some hacker basically spams my app as much as possible? can I somehow set up a hard limit for network so that I don't get charged crazy amounts?
you can set a hard limit on spend, you can't set a hard limit on network usage though
yes, I think that's mostly what it will be; MongoDB Atlas data fetching will be 90% of network activity, correct
so what do I do if there actually is some hacker?
you don't pay for ingress, so there's no cost in getting data from atlas, but you will pay to send data to atlas
ah... wait, ingress and egress
It's the first time I've heard about these
use Cloudflare and set up some rules so no bad traffic makes it to your app
so basically I only get charged for "sending" data to the DB, not fetching data from the DB?
cause if so it's fine
cause I'm caching most of the data; it gets sent to the DB once per hour
but they might request the DB data (already cached once per hour) like, I don't know, 100 times per hour, let's say. big ingress activity, and that's no extra charge, you tell me, right?
basically, in reality fetching data from the database still has a little egress since you are still sending outgoing traffic to the database to tell it what data you want, but for all intents and purposes you can ignore that cost since it's little
correct, we do not charge for ingress
right, let's say I bring cached data from the DB, let's say it's 500 MB of data. how much would it be approximately then, maybe like 1 MB of data egress there?
far less than 1MB, if your call to the database uses 1MB you're doing something really wrong lol
oh okay, so a normal request is less than 1 MB, nice. I don't know, I'm asking because I'm new to hosting providers and heavy deployments in general
well, heavy in my terms; my app is heavier than anything I've done before in terms of data processing etc.
may I ask what tech stack?
yes
express JS with mongoDB atlas (talking backend only)
frontend is react
but frontend will be somewhere else
here I wanna put backend
the most important is backend surely
what does the backend do to make you think it will be using on average 4GB?
well, it takes 3 million auction house objects with like 20 fields each, then does multiple rounds of filtering and stuff like that
virtual economy stuff, cryptocurrency games, an API for getting prices. it's very heavy though, yeah, you see, a huge amount of data
maybe it won't average 4 GB, I don't know
but I'm expecting, let's say, 250 users a day on average
let's say they use the app for 8 hours a day on average
and each hour the data is cached, so data processing won't be that crazy, but still some custom filters will be applied and they will be fetching like 500 MB of data to the frontend each hour. that's assuming they don't reload the page; if they do a few times, then they re-fetch the cached data, imagine.
something like that. what do you think would be realistically good to go for?
and what expected cost?
cache is definitely gonna reduce costs for sure
I'd say under $100, but that's not a promise
ah right sounds ok
yeah sure, thank god cache exists man
otherwise I'd be selling my leg or something
hahah
but I will say your use case is Pro, not Hobby
right
Pro sounds powerful, it has up to 32 GB of RAM and 32 CPUs. that sounds epic but I'm scared of the cost
you can still set a $100 spend limit
I would rather limit the RAM to 12 or 16 GB to make sure it doesn't get brute-forced by some guy spamming the page on 5 different devices or something xD
you can do that too
and probably 4 CPUs is more than enough i think
just know that your app will crash when it reaches the memory limit you set
yeah that's fine, I mean, rather that than a $500 bill
well a $500 bill would be impossible given you will set a spend limit
how much could an app cost with these settings: 16 GB of RAM, 4 CPUs, and if the total goes over $100 per month (whatever it is, either RAM, CPUs, network, etc.) just automatically close the server, like, crash it? will that even work?
not sure about the CPUs though, how powerful are they? there are many types of CPUs nowadays
well over $200, but if your application is using that much then hopefully with that amount of traffic it equates to a good amount of revenue haha.
if you hit a spend limit your application will be shut down.
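(same back-of-the-envelope math: 16 GB and 4 vCPUs running flat out for a whole month would be about 16 x $10 + 4 x $20 = $240, which is where "well over $200" comes from; an estimate from the rates implied above, not official pricing)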
they are Xeon CPUs
right, I mean, long story short this works on % of users, which makes sense
what Xeon CPUs, may I ask? I mean what model, so I can look up the specs
I don't know off the top of my head, sorry
ah
but do you maybe have some reference, like the year they got released or the GHz they run at... I mean how fast/slow they are, some sort of reference
I know they are Cascade Lake @2.8GHz
oh okey
so anyway, what happens if I start small?
let's say I start with the Hobby plan, so it's max 8 GB of RAM and 8 CPUs
can I then, if in the future I get more people, just click on Pro and it changes the plan instantly, or is it complicated to change plans?
if I move to Pro I would still 100% be limiting it to, like, max 16 GB RAM and probably still 8 CPUs or something
yeah you buy pro and then it will ask you what projects you want to put on the pro plan
and then it will re-launch my Hobby plan project on Pro instead, and that's it?
I mean, it's that simple?
you will need to do the relaunch yourself, but yes
and you'll need to set the spend limits up on your Pro workspace again, since the personal Hobby plan and Pro plans are treated as separate workspaces under the same account
ah nice
and btw, out of curiosity, all this was me asking about the backend, but do you ever deploy, like, frontend apps here, or nah?
cause I'm wondering about the cost of deploying the frontend. the frontend codebase isn't too big, it's kinda small; it's mostly a big datatable with pagination that will fetch like 500 MB of data, and the user can play around with the data, filter it locally, etc.
should be cheaper than the backend, right?
oh yes we deploy frontends all the time on railway!
our docs site is frontend hosted on railway! - https://docs.railway.app/
and we even have a guide for hosting a react frontend app on railway! - https://docs.railway.app/guides/react
but it's much cheaper than the backend, right?
if you have Cloudflare in front for cache, yes
never used Cloudflare, let's assume I don't use it
I mean, the frontend is chill, it's just showing DB data for the most part, plus a few simple register and login things, but that's it
it's like a 5-page app
then you only pay for the tiny amount of CPU and memory, like 0.1 vCPU and 40MB of memory (if done right), plus the egress it takes to send the web page and page assets
ah great
okay, so I think I'm decided: the first thing I wanna do is deploy my backend from my GitHub repository on the Hobby plan, to start with
@Brody does this work as a limit?
I think I could start with 10 USD just to test around
yes, it would act as a limit, but I highly recommend you sign up for the usage-based plan
but why though, if I just wanna, like, start doing some tests
counter question: why do you think prepaid is better for that?
cause nobody can hack the usage further
I think I've mentioned several times in this thread that you can set a spend limit
ah sorry, you meant with the other subscription as well? okay
makes sense then to go for the other one
prepaid is not the way to go, it's just headaches
done
how do I change the name here to a more fitting name?
in the project settings
okey nice
so I try to import from the GitHub repo and it throws me this error:
is the repo name longer than 32 chars?
then you will need to create an empty service and then add the repo
ah, okey
click the + create button
then you can add the repo in the service settings
okay, nice
I also added the env variables now
a deploy command, I see it here
it should be npm install, right? it's an Express app
there's no build command or anything like that in Express apps
@Brody
nixpacks will set an install command for you, you shouldn't use npm install
ah okay, nice
I clicked on deploy
and as I was waiting...
it says deployment successful
pog
but I don't know what to do now, or what my backend URL is
go to the service settings and get one
on generate domain?
ooh, here I can limit the stuff, yes?
yep, but remember what I said, if your app ends up reaching those limits it's just gonna crash loop
I usually run on port 3500 on localhost, not sure if that will cause trouble here:
yeah
okay, look
as long as you point the domain to the correct port you're good
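for anyone reading later: the usual pattern is to bind to the PORT environment variable Railway provides and fall back to the local port otherwise. a minimal sketch assuming a plain Express server, not the poster's actual server.js:

const express = require('express');
const app = express();

// Railway injects a PORT env variable; fall back to the local dev port (3500 in this thread) otherwise
const port = process.env.PORT || 3500;

app.get('/', (req, res) => res.send('ok'));

app.listen(port, () => console.log(`listening on ${port}`));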
I made 1 hit to my super heavy endpoint
it took 2 minutes and 16 seconds, which is not bad for a start
I will try hitting it again, now it will use cached data
okay now try on our metal hardware
no no, but wait, let's go slow, step by step
haha
this is unironically good btw
yeah
on my PC it takes like 1 min 20 seconds
but my PC is my PC, my PC is powerful anyway
so this is, like, positive
let me see how it does with cache now
the cache might take similar time anyway, but still, just to make sure it works with that too
yeah, that's why I wanted you to try on metal; the metal region has a better CPU than what you are currently running on
we will get there, but step by step, we go slow. I like to test the cheapest first and then scale progressively, let's go slow.
it's the same price
okay, the cache took 1 min 50 s, that's not bad
oh, it's the same price?
yeah, of course
wait, better CPU but same price? what's the catch?
you can't use volumes on it yet
what are volumes?
so it reads: "Volumes allow you to store persistent data for services on Railway", but I already have my MongoDB for cache, right?
yeah, but volumes are for file storage
not applicable to you, but you asked so I answered
ah
anyway, about the better CPU we were talking about, how can I get the better CPU for the same price, sorry?
switch to a region marked metal in your service settings
oh I see
I'll change it and redeploy, let's see. so metal is always better, I always choose metal, yes?
it should be better, those machines use much newer CPUs
nice
okay, let's give it a moment, it's deploying
wait, I even have 1 doubt
how does Railway know that my Express file is server.js
if everyone uses app.js?
I didn't mention anything, but it works
i assume you have a start script in your package.json?
i have one called "server"
"server": "node server.js"
magic
what is it xD
hahah
some AI, I'm guessing, that tries to figure out the command based on what it sees
who knows
it just detected a server.js file and ran it
oh right
we don't use AI
nice
some people use it too much too xd
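for reference, the conventional way to make the start command explicit is a start script in package.json. a small sketch assuming server.js is the entry point, since the actual repo isn't shown in this thread:

{
  "scripts": {
    "start": "node server.js"
  }
}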
anyway, this is deployed
I'ma try to hit the endpoint, it's gonna use cached data, let's see the time
the previous one was 1 m 50 s
I assume an in-memory cache? if so, that cache is gone since you redeployed
no no
I meant my own MongoDB cache
like, it's not refetching all the data from some API as the first time; it just fetches from MongoDB. it still has to do some processing of the data, so it's normal that it takes a long time
I would highly recommend you deploy Redis on Railway and use that as a cache, since right now you will pay for the cache if it's in Atlas
okay, now it took 1 minute 38 seconds
no, I don't pay anything for the cache in Atlas
I'm using GridFS
with the free tier, and it works
egress, remember
ah right
sorry
but I... I never used Redis before
time to learn then
it's not the time, it's time to deploy this as fast as possible and start testing; the customer wants to start soon
so in total I made 3 endpoint hits
can I see a chart of the hardware to see how much RAM / CPUs were used?
look at the metrics tab
like, is there a way to see somewhere how much it took
ah okey
so, how bad is it? mmm, I mean 2 GB of RAM, that's quite good I think, not using that much
and 2 vCPU, not that bad either
I think the reason it takes so much is MongoDB GridFS, I need to improve that myself
but we can see that the server is doing well, it's not exploding or anything
what if 10 people request at the same time, will the server crash or just go a bit slower with all the requests?
btw this means like 240MB of egress, approximately, right? which gets saved on further requests, right?
yeah
not doing the cache in Atlas will surely improve that
yes, but for now I plan to continue doing the cache in Atlas, forget about changing it
and the 240 MB was without the cache; the first time, I removed the cache intentionally
from my MongoDB database
so further requests aren't doing much egress, which is, like, good, right? saving costs, yes?
just to make sure I understand what's happening, I'm a bit slow and new to this, sorry
sounds about right
oh nice π
so let's analyse: this cache stays for 1 hour from the time it's cached, let's say
so about 24 caches per day, approximately
and then this for 30 days
so it's about a 240 MB call x 24 times a day x 30 days a month
(the cache gets shared across all users of the app)
172 GB per month
that's like $17.2, right?
then obviously login, register, etc. would add a bit more, but basically it would be this
retrieving the cache would be free, since that's ingress
ah right
still $17 USD per month due to the 240 MB not-yet-cached call every hour
which is, I would say, quite reasonable
right?
I'd say so
perfect!
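(spelled out: 240 MB is about 0.24 GB per refresh x 24 refreshes a day x 30 days, which is roughly 173 GB a month; at the roughly $0.10/GB egress rate that the $17.2 figure implies, that's about $17 a month. an estimate based on this thread's numbers, not a quote)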
so just to make sure this is clear: if 10 users use the endpoint at the same time, obviously it's going to go much slower for a bit, but it should in theory not crash the server, right? @Brody
it definitely could crash, this is node after all, not typically known for high throughput with compute
under high load you will want to use replicas, aka horizontal scaling
god knows what that is, man, hahah
I mean the replicas thing
haha
so: "Scale horizontally by manually increasing the number of replicas for a service in the service settings. Increasing the number of replicas on a service will create multiple instances of the service deployment." does this mean I have to use different domains for each replica?
or do they work behind the scenes but on the same domain?
read a little more and your question will be answered
oh nice
so how's the cost of adding a replica, @Brody? is it just as if I was adding another exact same server in terms of pricing, or how does it work?
correct, it's just another instance of your app
so I have this doubt
I have a customer, but I'm the one with the GitHub repo
my customer is supposed to put the money into Railway for the servers
is it possible to do it so that I keep my GitHub repo and he pays for the servers?
how would we do it? @Brody
yeah you'd just need their card details and you would enter them when signing up for pro
but that's dangerous for him, no way, right?
I mean, no way he gives me his card details, there has to be another way
then you pay and send them invoices
mm... no, for some private reasons I can't do that
so basically he would register
on Railway
and he puts his card on his account, that would work for me
it needs to be done on the account you already have
but then how can he connect the GitHub repo? can I keep the GitHub repo on my GitHub account and then give him, like, permissions on his GitHub account?
I'm assuming he creates an account for himself; he won't be using my Railway account, I'm just testing stuff with the Hobby plan, but in the future we deploy with his account
you can only connect one GitHub account to one Railway account
so there's no way on GitHub to give access to somebody else, like a GitHub team project or something? I don't know, I'm not an expert at GitHub either
I'm not either
I guess I could create a new GitHub account for him, put my code on a repo for him, and he can use that
right
you would need to push code to that repo and be a part of their team, meaning that's 2 seat costs
no, we can both use the same GitHub account
respectfully, this is overcomplicating it; pay it and send them the invoice, that is how most people do it in your situation
I can't, it's tax-related stuff
you definitely can
I assume it's just that you don't want to set it up
I'm not a legally registered business anyway
I can't issue invoices legally here
it's not an option
you don't know what country I am from
the laws are different
Is it hard for you to do that? where I live it's $70 and a sheet of paper
you probably live
in a normal country
I live in a degenerate socialist shithole
so as of now I'd rather choose this, otherwise here it's a $300 monthly tax
+ 35-40%
fair enough
but the base tax is $300
yeah
we will just do 1 GitHub account, I'm on okay terms with him, we're friends
so I think it will work that way
@Brody and anyway, thank you for all the support today, I will try to ask you around here anytime I have more doubts
I will now give you my most beloved gift
a fat chinchilla pic
my favourite floof
lmfao thank you very much, very nice picture
yes, I love them, so fluffy and big, they're friendly and rarely bite unless quite stressed
have a nice day!
@Brody Hi again, I was investigating this Redis thing. I got as far as having code that should hopefully work, but it throws me an error about not being able to connect the client:
How do you usually install Redis anyway, and is it complicated to do on Railway?
I'm just using a Windows 11 machine, nothing special
just click that same create menu, choose database and then choose redis
and then have your code connect to it via environment variables
ah, and btw I found there's an installer for Windows. it's the first time I've ever used this so no idea what I'm doing, but hopefully it works. I will try locally first, then try Railway
got it running now, it seems... kinda
kinda?
@Brody Hi, so I'm learning Redis, and I got it working
I'm running it locally correctly:
however, I have absolutely no idea how to set it up exactly on Railway
I see you wrote to me before about "have your code connect to it with environment variables", but so far I didn't need any environment variable locally, so I'm not sure where it would go
in my Express backend app the initial code is like this, for reference:
Heya! the redis client will use the defaults to connect to a redis server. If you're running a simple redis server locally, the server likely uses all the defaults, therefore it's kinda magic for you.
On Railway, you would set up a Redis service, then in your app's service you would add a REDIS_URL env variable and have it reference ${{Redis.REDIS_URL}}.
You're also likely going to need to edit your code to use your env: createClient() becomes createClient(process.env.REDIS_URL). To avoid null issues, you can also do const redisClient = process.env.REDIS_URL ? createClient(process.env.REDIS_URL) : createClient()
You can learn more about reference variables here: https://docs.railway.app/guides/variables#reference-variables :)
Does this cost any money nowadays?
yes, Railway is a paid platform haha
No, I mean the Redis additional cache thing
I'm asking because if it's gonna cost money, I might as well use GridFS from MongoDB then
yes, Railway is paid
what's the cost of Redis
it all depends on the resource usage
then it should save money if anything, since Redis is supposed to save resource usage, right?
meaning: does it cost much in resources to even create a Redis service?
no, it doesn't cost much to have an idling Redis database, and not having to leave the private network would be ideal.
the most ideal situation would be to use a Railway-hosted Mongo database.
brother, you told me yesterday "you should learn redis", and now you are telling me I'd "better use a mongodb database", which is what I was already using in the first place? what's going on?
what the funk
you should do both, specifically both on Railway, not Atlas
pf
Anyway, the other guy told me this: "On Railway, you would set up a Redis service, then in your app's service you would add a REDIS_URL env variable and have it reference ${{Redis.REDIS_URL}}", in here
was he referring to process.env.REDIS_URL? what's that Redis.REDIS_URL syntax? I don't know
ah nvm, it explains it in the link, it seems
do you know about deploying the Redis? I'm supposed to get some URL, but no idea which one it is, is it this one here?
I think Nico explained it quite well
the main doubt I have is this: ${{Redis.REDIS_URL}}, what is "Redis" here exactly, and where is that supposed to go exactly?
per Nico's explanation -
in your app's service
as of now, here's what I have:
okay, I'ma go to my app service then
like that, I think?
Nico said REDIS_URL
ah okay, I will change it then
maybe it would be beneficial to read his message again?
I think I got it, it's deploying now so it will take a bit
let's see your code changes?
I wonder, since it's 2 different services, is it considered egress when fetching from the Redis?
yeah
no, you do not pay egress for Redis since you are connecting to it via the private network
hence why I also suggested Mongo on Railway
for now I will skip that, but I will keep Redis here on Railway
is it normal that it's taking 8 min to build, btw?
no
it gave me this error:
Redis Client Error Error: connect ECONNREFUSED ::1:6379
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1555:16) {
errno: -111,
code: 'ECONNREFUSED',
syscall: 'connect',
address: '::1',
port: 6379
}
are you using ioredis?
and it keeps giving me the error, seems like a loop
I'm using this:
the npm redis package
what is this createClient function?
you have some syntax wrong and your code is trying to connect to a local database instead of the hosted database
it's from the redis package
ah, Nico gave you the syntax for ioredis; this is the syntax for redis -
ooh
that example there, the url: is the REDIS_URL env variable reference, yes?
yes
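putting Nico's and Brody's pointers together, the node-redis (npm redis package) version looks roughly like this. a sketch assuming the REDIS_URL variable is set on the app's service via ${{Redis.REDIS_URL}}, not the exact code from this thread:

const { createClient } = require('redis');

// REDIS_URL comes from the Railway service variables; without it the client
// defaults to localhost:6379, which is exactly the ECONNREFUSED ::1:6379 error above
const redisClient = process.env.REDIS_URL
  ? createClient({ url: process.env.REDIS_URL })
  : createClient();

redisClient.on('error', (err) => console.error('Redis Client Error', err));

async function main() {
  await redisClient.connect();
  await redisClient.set('hello', 'world');
  console.log(await redisClient.get('hello'));
}

main();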
it's looking dramatic over here, damn
ah nvm, that was the previous deploy I think
I think it seems ok now
awesome!
now store the cache in Redis, not Atlas
yeah, I'm using Redis for the cache as of right now, not GridFS like before. as of now
and yeah, I tried it now with Postman and it worked
sweet
I'm now thinking of deploying the frontend, a completely different thing this time
what kind of frontend?
React
it's a React app, like Vite React
okay, I'ma check that
@Brody by any chance do you know the exact command that needs to be used to start the production server for React with Vite? I'm not seeing it in the guide; locally I usually run "npm run dev" but that's not the one, I am doing "npm run build" but I'm not sure how to actually run it
the guide does cover that; it's defined in the nixpacks.toml file and you should not be setting it yourself
ooh, okey
hold on, before I continue: I have a domain bought on Namecheap, is it possible to use that domain with Railway for the frontend?
yes
perfect, how would I do that?
I got this thing:
not sure if that's a breaking issue with the Caddyfile; also, I tried running the command right there in VS Code in another terminal, the "caddy fmt --overwrite", and it says it's not recognised, I don't know
oh it seems fine nvm
Hi
@Brody @Nico Hi, I have this issue. I already upgraded to Pro, and when I do 1 request to my flipping tool endpoint from the frontend (the backend as well, both hosted here), it doesn't crash. But when I do 2 requests together, it won't work and it shows me this error on the frontend:
No way it's crashing on 32 GB of RAM and 32 CPUs, right? my endpoint is heavy to load but not that much; when I try with 1 it usually shows about 2 GB of RAM per call and 2 CPUs per call, so it should not crash with 2 requests at the same time
nvm, I updated some Node parameter and I think it works
I started using this parameter: --stack-size=8192 and it worked
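(for context, that flag goes on the node invocation itself, presumably something like "start": "node --stack-size=8192 server.js" in package.json; the exact placement is a guess, only the flag itself comes from this thread)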
node is single threaded so there is a limit to what you can do with a single instance, you may have fixed it now but you'll likely need to add some replicas at some point
@Brody And btw, how do replicas work in terms of pricing? If I have 1 service with 1 added replica, is it gonna be the max hardware cost of my plan (Pro at the moment), so 32 GB and 32 CPUs multiplied by 2, i.e. 64 GB and 64 CPUs? how does that work exactly?
you are only charged for the resources you actually use, and having replicas does not change that fact.
oh nice, so basically replicas just share the resources of the same service, right? a service on the Pro plan, let's say, with 20 replicas, can still never go above 32 GB of RAM and 32 CPUs, yes?
no, resources are per replica
ah
right
@Brody Hi, by any chance is there anything faster than Redis available on Railway that does the same thing?
well first, why do you need a faster Redis?
my backend is not going as fast as I would like. it's not bad, but I'm just wondering if there was anything. it still improved with Redis vs MongoDB or MongoDB GridFS, so I'm not complaining, but yeah
do you have actual metrics to prove that Redis is the bottleneck?
oh no, not really, but since I moved from GridFS to Redis and it improved, I was hoping there was something even better
there are a few bottlenecks, but some of them are basically hard to dodge and I don't think I can do much better. I'm a junior-mid fullstack guy anyway and this thing I'm doing is already quite complex; it's 800 lines of code for an endpoint hit and, except for a few console.logs, everything I have is needed
then I'm confident in saying that Redis is not the bottleneck
you are confident, but I can't show you my code and I can't figure out how to improve it much, that's why I asked, but yeah
I don't even know if there's a way to not have a bottleneck when I have to process literally 3.2 million objects with each object having like 30 fields; it's crazy, this is the highest amount of data I've ever had to manage
write it in a more performant language
that's a reasonable idea, but I don't know any other than Node.js, tbh, at a decent enough level
have you ever done Node.js before? if so, can you recommend me a similar and hopefully not-too-hard language to learn that's still performant?
learning time!
golang
yeah, seems like it. for now I will try to make this decent enough just so I can launch with the customer, but pretty much it seems like that
golang, okay
I'm seeing stuff about Rust too, it seems even faster apparently, I don't know
it is, but you will not have a fun time going from JavaScript to Rust. this isn't me gatekeeping anything, even experienced Rust devs will tell you JavaScript to Golang is a far easier path
oh, I see
@Brody Hi, is Redis Stack available on Railway? I was doing some stuff and apparently for more advanced queries and similar things a Redis Stack is needed. here's an example of what I mean: https://stackoverflow.com/questions/76837500/unknown-command-ft-create-when-creating-an-index-for-redis
@Nico
Hi!
Just pitching in as a community member. They do support custom Dockerfiles and you can absolutely deploy your own Redis Stack image (via a GitHub repository, for example) if there is no available template.
Out of the box, I think Railway will only offer core Redis.
I would only recommend doing this if you have some previous experimentation experience with Docker.
This example is a Python Flask app but the spirit carries over
https://docs.railway.app/guides/flask#use-a-dockerfile
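a minimal sketch of what that Dockerfile could look like, assuming the public redis/redis-stack-server image from Docker Hub is acceptable for the use case (not something this thread confirmed):

# use the Redis Stack server image so modules like FT.CREATE are available
FROM redis/redis-stack-server:latest
# Redis listens on 6379 by default
EXPOSE 6379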
ah okey thanks
@Brody Please help me with an issue with Redis network egress
please do not tag the team
ah sorry