Alaanor
Reading ~30GB sequentially from the volume makes the RAM go to ~30GB
At least the UI said I was on it. I remember you said that it might be a lie from the UI because it would not work with volumes, but now we got volumes on v2, so I thought I could maybe trust that UI.
@Brody @Angelo I saw v2 volumes are now a thing, so I spent the day setting things up and trying on Railway again, but unfortunately it's still stuck this way. Not to complain or anything, just wanted to share that it did not magically fix that, as we thought it perhaps would.
This is really cool, appreciate the finding a lot. Thanks 👍 I'll be checking the Railway changelog for v2 with volumes frequently, and hopefully one day I can be fully back on Railway :)
Since I could not find a solution with Railway for this particular thing, I bought a server somewhere else, although the disk IO isn't as good as Railway's, and it adds some complexity for deployment and monitoring :( But yeah, I can't afford adding $200+ to my monthly bill just to read a few files sometimes. I still use Railway for a lot of other stuff and I'm happy with it. I just figured I should use Railway where it helps me instead of trying to fight it. No hate, I can understand why buff/cache is counted. Just wanted to give an update for future people searching this thread.
I had some hope for a moment with O_DIRECT (https://linux.die.net/man/2/open): it works locally and doesn't bump buff/cache, but on Railway I get an error, so I guess the filesystem doesn't accept this custom flag :sossa:
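For anyone curious what that attempt looks like, here is a minimal Rust sketch of reading with O_DIRECT, assuming the `libc` crate; the path, chunk size, and alignment are made-up illustrations, not the poster's actual code. O_DIRECT bypasses the kernel page cache, but it requires aligned buffers and will fail with EINVAL (or similar) on filesystems that don't support it, which is presumably the error seen on Railway.

```rust
use std::alloc::{alloc, dealloc, Layout};
use std::fs::OpenOptions;
use std::os::unix::fs::OpenOptionsExt;
use std::os::unix::io::AsRawFd;

fn main() -> std::io::Result<()> {
    // Open with O_DIRECT so reads bypass the page cache (no buff/cache growth).
    // "/data/big-file.bin" is a hypothetical path on the mounted volume.
    let file = OpenOptions::new()
        .read(true)
        .custom_flags(libc::O_DIRECT)
        .open("/data/big-file.bin")?;

    // O_DIRECT needs the buffer, offset and length aligned to the logical
    // block size (commonly 512 or 4096 bytes), so allocate an aligned buffer.
    const ALIGN: usize = 4096;
    const LEN: usize = 1 << 20; // 1 MiB per read
    let layout = Layout::from_size_align(LEN, ALIGN).unwrap();
    let buf = unsafe { alloc(layout) };
    assert!(!buf.is_null(), "allocation failed");

    loop {
        let n = unsafe { libc::read(file.as_raw_fd(), buf as *mut libc::c_void, LEN) };
        if n <= 0 {
            // 0 = EOF; -1 = error, e.g. EINVAL when the filesystem rejects O_DIRECT.
            break;
        }
        // ... deserialize / process the n bytes here ...
    }

    unsafe { dealloc(buf, layout) };
    Ok(())
}
```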
I tried all I could think of really 😕 even with an explicit
drop(variable_with_deserialized_data)
at the end of each loop. Even after my job is done and cleaned up, the memory doesn't drop. I still want to point out that locally the same job never goes above a few MB of RAM, on the same dataset.
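To make the shape of that loop concrete, here is a hypothetical reconstruction in Rust; `deserialize`, the path, and the chunk size are placeholders, not the real job. It shows why the explicit drop doesn't help: the process heap stays small because each chunk is freed every iteration, while the ~30GB growth is the kernel's page cache (buff/cache), which `drop()` cannot release.

```rust
use std::fs::File;
use std::io::{BufReader, Read};

fn main() -> std::io::Result<()> {
    // Stream the large file in chunks so only one chunk is ever held in memory.
    let mut reader = BufReader::new(File::open("/data/big-file.bin")?);
    let mut chunk = vec![0u8; 8 << 20]; // 8 MiB per iteration

    loop {
        let n = reader.read(&mut chunk)?;
        if n == 0 {
            break; // EOF
        }
        let variable_with_deserialized_data = deserialize(&chunk[..n]);
        // Explicit drop at the end of each loop iteration, as described above.
        // This frees the process's own heap allocation, but the bytes the kernel
        // cached while reading the file stay in buff/cache.
        drop(variable_with_deserialized_data);
    }
    Ok(())
}

// Placeholder for whatever deserialization the real job performs.
fn deserialize(bytes: &[u8]) -> Vec<u8> {
    bytes.to_vec()
}
```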