Railway3mo ago
jeremy

Resize volume on hobby plan

Hi there,

During the night my database reached the 5GB limit and now I'm kind of stuck. I tried deploying the file browser template and mounting the volume to it, but because the volume is full, the template couldn't be deployed.

The volume is attached to LibSQL, and the growth is due to WAL mode. Usually, when I redeploy the libsql server, it concatenates all the WAL frames and the real size of the volume is approximately 600MB. However, since it hit the limit, I also can't redeploy the libsql server ("disk I/O error"), meaning it can't compact the volume and go back to its "normal" size.

I'm now trying to create a new volume and restore data from my backup so I can at least restart my services. Is there a way to grow the volume, either temporarily (to 10GB for example) or as a paid add-on for the hobby plan (to 10GB as well)?

PS: How can I download all the files from the volume? The file browser doesn't work (error: "mkdir: can't create directory '/data/storage': No space left on device").

id: 34849d22-d685-4e73-8858-fbd4fe42ea65
volume: db249e52-dbf7-43a5-963e-090ad7f2b034
Solution:
I can grow this to 10GB for you when I'm back at the computer. to download files you would need a service that doesn't write data to the volume on startup; filebrowser writes its disk-based database to disk along with metadata...
22 Replies
Percy
Percy3mo ago
Project ID: 34849d22-d685-4e73-8858-fbd4fe42ea65,db249e52-dbf7-43a5-963e-090ad7f2b034
Solution
Brody
Brody3mo ago
I can grow this to 10GB for you when I'm back at the computer. to download files you would need a service that doesn't write data to the volume on startup; filebrowser writes its disk-based database to disk along with metadata
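For reference, a minimal sketch of such a read-only service in Go (not an official template, just an illustration): http.FileServer only ever reads from disk, so it can start even when the volume is full. The /data path is an assumption taken from the error message above; it would have to match wherever the volume is actually mounted.

```go
// Read-only browser for a full volume: serves files over HTTP and never
// writes to the mounted disk, so startup can't fail with "no space left".
package main

import (
	"log"
	"net/http"
	"os"
)

func main() {
	root := "/data" // assumed mount path, per the error message above

	port := os.Getenv("PORT") // Railway injects PORT for web services
	if port == "" {
		port = "8080"
	}

	// http.FileServer performs reads only; no database or metadata
	// files are created on the volume at startup.
	http.Handle("/", http.FileServer(http.Dir(root)))
	log.Printf("serving %s read-only on :%s", root, port)
	log.Fatal(http.ListenAndServe(":"+port, nil))
}
```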
jeremy
jeremyOP3mo ago
Alright, that would be awesome, let me know when you are able to do it! Is there any existing template for that? Otherwise I'll write something myself in case of an emergency like this in the future
Brody
Brody3mo ago
nothing that I'm aware of to simply dump your volume in one go like filebrowser could have done
jeremy
jeremyOP3mo ago
thank you, this is very much appreciated! And I'm writing a stupid-simple template to dump a volume
Brody
Brody3mo ago
like just dump a zip?
it sure is a good thing we have volume alerting on the pro plan 😉
jeremy
jeremyOP3mo ago
yep, a zip
Sure is, but tbh, automatic scaling/resizing would be even better. Alerts can still be missed
Brody
Brody3mo ago
that's why you can set alerts for different thresholds 🙂
jeremy
jeremyOP3mo ago
I also need to take a closer look at the sqld configuration; I might be able to tune the WAL behaviour so the volume doesn't grow so fast
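For anyone finding this later: it's unclear what sqld itself exposes for this, but at the plain SQLite level the relevant knobs are the wal_checkpoint and wal_autocheckpoint PRAGMAs. A sketch using the mattn/go-sqlite3 driver against a local database file (the path is made up):

```go
// Manually checkpointing a SQLite database in WAL mode so the -wal file
// is folded back into the main file and truncated. This is plain SQLite
// behaviour; whether sqld exposes an equivalent setting is a separate question.
package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3" // CGO-based SQLite driver
)

func main() {
	db, err := sql.Open("sqlite3", "/data/db.sqlite") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// TRUNCATE writes all WAL frames into the database file and resets
	// the WAL to zero bytes; the PRAGMA reports (busy, log, checkpointed).
	var busy, logFrames, checkpointed int
	err = db.QueryRow("PRAGMA wal_checkpoint(TRUNCATE);").
		Scan(&busy, &logFrames, &checkpointed)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("busy=%d log=%d checkpointed=%d", busy, logFrames, checkpointed)

	// Lowering the auto-checkpoint threshold (in pages, default 1000)
	// makes SQLite checkpoint more often, keeping the WAL smaller.
	if _, err := db.Exec("PRAGMA wal_autocheckpoint = 500;"); err != nil {
		log.Fatal(err)
	}
}
```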
Brody
Brody3mo ago
I mean, Pro allows you to grow your own volume to 250GB, so take that how you want haha
jeremy
jeremyOP3mo ago
Haha yeah, I wouldn't have to worry as much. I'm still closely following the changes to the Pro plan, maybe with included usage in the future. For the moment, the hobby plan is enough for my side project
Brody
Brody3mo ago
sounds good to me, just a fair warning, your next volume increase will need to be done by upgrading to pro.
jeremy
jeremyOP3mo ago
Alright, that should do the job https://railway.app/template/EBwdAh
Brody
Brody3mo ago
are you down for some template feedback?
jeremy
jeremyOP3mo ago
Yeah sure, it’s the first one I wrote
Brody
Brody3mo ago
your code expects the user to mount the volume to the correct location. if you give the user room to do something wrong, it's guaranteed that they will do something wrong. instead, have the code use that railway volume mount path variable so that there is no opportunity for the user to get it wrong.
another question: how fast can node even zip, for example, a 50GB directory?
jeremy
jeremyOP3mo ago
I actually started using a variable for the volume path, but when mounting the volume to the service you have to define the mount path, so it's still error-prone. Unless you can define the volume path when mounting it to your service?
Good question, it might not be the best. Do you have a way for me to try with some dummy data through Railway? I could write the zipping part in whatever is fastest
Brody
Brody3mo ago
if you use the variable, it doesn't matter where the user mounts the volume
as for benchmarking node zipping a directory, it doesn't need to be railway specific; you can run the test with a 50GB directory on your own computer, as long as you have an NVMe drive
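Concretely, the pattern Brody is describing might look like this, assuming RAILWAY_VOLUME_MOUNT_PATH is the variable Railway injects for an attached volume:

```go
// Resolve the volume location from Railway's injected variable instead of
// a hardcoded path, so nothing breaks if the user mounts it elsewhere.
package main

import (
	"fmt"
	"log"
	"os"
)

func main() {
	root, ok := os.LookupEnv("RAILWAY_VOLUME_MOUNT_PATH")
	if !ok {
		log.Fatal("RAILWAY_VOLUME_MOUNT_PATH not set; is a volume attached to this service?")
	}
	fmt.Println("dumping volume mounted at", root)
}
```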
jeremy
jeremyOP3mo ago
Alright cool, I'll improve that, thanks 👍
Alright, updated the template: rewrote it in Go, and performance is quite good locally, ~1:30 min for 40GB. Through Railway I only have 400MB of data to play with for testing, and that took 20 sec. If you have any big dummy data to try it out with, that could be interesting. I have one issue to fix where memory stays very high after streaming the ZIP; I'll look at it later
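On the memory point: one way to keep usage flat is to stream the archive straight into the HTTP response rather than building it in memory first. A rough sketch, not the actual template code; the route name and port are made up:

```go
// Streaming a directory as a ZIP over HTTP without buffering the archive:
// archive/zip writes straight to the ResponseWriter, so peak memory stays
// at roughly one io.Copy buffer regardless of directory size.
package main

import (
	"archive/zip"
	"io"
	"io/fs"
	"log"
	"net/http"
	"os"
	"path/filepath"
)

func zipHandler(root string) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/zip")
		w.Header().Set("Content-Disposition", `attachment; filename="volume.zip"`)

		zw := zip.NewWriter(w) // writes directly to the response stream
		defer zw.Close()

		err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
			if err != nil || d.IsDir() {
				return err
			}
			rel, err := filepath.Rel(root, path)
			if err != nil {
				return err
			}
			f, err := os.Open(path)
			if err != nil {
				return err
			}
			defer f.Close()

			entry, err := zw.Create(rel)
			if err != nil {
				return err
			}
			_, err = io.Copy(entry, f) // fixed-size buffer, no full-file reads
			return err
		})
		if err != nil {
			log.Println("zip stream aborted:", err)
		}
	}
}

func main() {
	root := os.Getenv("RAILWAY_VOLUME_MOUNT_PATH")
	http.HandleFunc("/dump.zip", zipHandler(root))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```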
Brody
Brody3mo ago
haha, if I knew you were gonna rewrite it in Go I would have given you some tips
I know Go's gzip package is single-threaded, so I'm going to assume its zip package is too; you could swap to using https://github.com/klauspost/pgzip
jeremy
jeremyOP3mo ago
I am using this package from the same author: https://pkg.go.dev/github.com/klauspost/compress/zip. I can give the one you mentioned a try
looks way faster with pgzip
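For a sense of what the swap looks like: pgzip is a parallel drop-in for compress/gzip, so the usual pairing is archive/tar plus pgzip for a .tar.gz rather than a .zip. A sketch under that assumption; file names are placeholders:

```go
// Parallel-compressed tar.gz using klauspost/pgzip in place of compress/gzip.
package main

import (
	"archive/tar"
	"io"
	"log"
	"os"

	"github.com/klauspost/pgzip"
)

func main() {
	out, err := os.Create("volume.tar.gz") // placeholder output name
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()

	gz := pgzip.NewWriter(out)
	// Compress 1MB blocks with up to 8 in flight; tune to the machine.
	if err := gz.SetConcurrency(1<<20, 8); err != nil {
		log.Fatal(err)
	}
	defer gz.Close()

	tw := tar.NewWriter(gz)
	defer tw.Close()

	// One file for brevity; a volume dump would walk the directory instead.
	f, err := os.Open("example.dat") // placeholder input file
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	info, err := f.Stat()
	if err != nil {
		log.Fatal(err)
	}
	hdr, err := tar.FileInfoHeader(info, "")
	if err != nil {
		log.Fatal(err)
	}
	if err := tw.WriteHeader(hdr); err != nil {
		log.Fatal(err)
	}
	if _, err := io.Copy(tw, f); err != nil {
		log.Fatal(err)
	}
}
```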
Brody
Brody3mo ago
how much faster are we talking?