What's the best way to shrink the world size?
In hindsight, I picked a server plan that doesn't have enough storage, and now I'm running out. Does anyone have a suggestion for shrinking my world size that's safe and effective? Mainly I just want to remove unused chunks; I'm willing to do it by hand with a tool of some kind if needed. Paper 1.21.1
Just set the world border smaller so that players can't load more chunks, or buy more storage
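For reference, the vanilla commands look something like this (the number is the new full diameter in blocks, not a radius):
```
/worldborder center 0 0
/worldborder set 10000
```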
Yep
You can use a tool like MCA Selector to find and remove unused chunks to reduce the space. You can especially use this to trim chunks outside your world border.
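It also has a headless mode if you'd rather script it. Something like the below, though I'm going from memory on the exact flags, and the path and threshold are made up, so check the MCA Selector wiki for your version before running anything:
```
# delete chunks where players have spent under 5 minutes total
java -jar mcaselector.jar --mode delete \
  --world /srv/minecraft/world \
  --query 'InhabitedTime < "5 minutes"'
```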
Shrinking the world border will NOT automatically remove already generated areas outside your new border.
^
MCA Selector is the way to go
I will do that, thanks guys
linear region file format
try linear paper
it compresses the world to roughly half the size without removing anything
Do know that the linear world format loads chunks faster and has its own downsides, so please consider the downsides before switching. @Schroedes
I will do some research before implementing
I'm curious what the downsides are. Could you give a few examples?
Uses more RAM, and isn't quite as flushy in terms of chunk saving, since it has to write whole region files at a time instead of individual chunks
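A toy sketch of that save-granularity difference, if it helps. The data is made up and this isn't real region I/O; .mca's 4 KiB sector size is the real value, the rest is just illustrative:
```python
# .mca rounds each chunk blob up to 4 KiB sectors and can overwrite one
# chunk's sectors in place; a single-stream file has to be rewritten whole.
import os
import zlib

SECTOR = 4096
chunks = [os.urandom(1024) + b"\x00" * 3072 for _ in range(1024)]  # fake columns
blobs = [zlib.compress(c) for c in chunks]

# rewrite cost when chunk 0 changes:
mca_rewrite = -(-len(blobs[0]) // SECTOR) * SECTOR  # just that chunk's sectors
linear_rewrite = sum(len(b) for b in blobs)         # roughly the whole file
print(f"mca-style: ~{mca_rewrite} B, linear-style: ~{linear_rewrite} B")
```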
I'm kind of curious how efficient the storage would have been if the linear format had trained zstd's external dictionary on the region and still compressed each chunk independently, instead of compressing everything as one stream.
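Something along these lines with the python-zstandard bindings (`pip install zstandard`); the chunk data here is a stand-in for real NBT, and the 16 KiB dictionary size is a guess:
```python
import os
import zstandard as zstd

# fake chunk columns: shared structure plus some unique bytes,
# standing in for real NBT data
shared = b'{"sections":[{"palette":["minecraft:stone","minecraft:dirt"]}]}' * 8
chunks = [shared + b"%d" % i + os.urandom(32) for i in range(1024)]

# train a dictionary on the samples, then compress each chunk on its own
dict_data = zstd.train_dictionary(16 * 1024, chunks)
comp = zstd.ZstdCompressor(dict_data=dict_data)

per_chunk = sum(len(comp.compress(c)) for c in chunks)
one_stream = len(zstd.ZstdCompressor().compress(b"".join(chunks)))
print(f"per-chunk with dict: {per_chunk} B, single stream: {one_stream} B")

# the dictionary would have to ship with the region file, and
# decompression needs the exact same one
decomp = zstd.ZstdDecompressor(dict_data=dict_data)
assert decomp.decompress(comp.compress(chunks[0])) == chunks[0]
```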
It uses more RAM like arai said, and the chunk loading is slower, which would be quite bad for an SMP-type server.
On paper (the phrase, not the server) it may actually work better for larger worlds, since it's going to need fewer IOPS to load regions, making it more suitable for HDD-backed storage. Though as always, ymmv
Yeah, but not a lot of people use HDDs for their storage nowadays.
Which would mean SSDs should have no problem whatsoever keeping up ¯\_(ツ)_/¯
I have tried it myself and the chunk loading was slower compared to .mca
What were you testing it on? zstd is blisteringly fast to decompress, but only if you're not CPU-starved, which makes it not necessarily well suited to a hosting service.
I can't recall how much CPU I had but it was on a public host yeah
The major cost of linear paper is that it's going to decompress all 1024 chunk columns every time you load a region. If you were to random-teleport to coordinates (step through a nether portal, etc.), that could mean loading up to 4 regions at once if you're at both the x and z border of a region, which would take a moment.
Once that cost has been paid, you should have all chunk columns for those regions loaded in memory, hence no additional loading time spent.
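Roughly this difference, sketched with toy data and python-zstandard (`pip install zstandard`); nothing here resembles the real chunk format:
```python
import os
import zstandard as zstd

chunks = [b'{"xPos":%d}' % i + os.urandom(256) for i in range(1024)]

# .mca-style: each chunk is its own compressed blob, so loading one chunk
# decompresses only that blob
frames = [zstd.ZstdCompressor().compress(c) for c in chunks]
one = zstd.ZstdDecompressor().decompress(frames[42])

# linear-style: the region is one stream, so loading any chunk means
# decompressing all 1024 columns first (fast with zstd, but not free)
stream = zstd.ZstdCompressor().compress(b"".join(chunks))
everything = zstd.ZstdDecompressor().decompress(stream)
assert one in everything
```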
While I'm defending zstd here, I'm actually not happy with the design decisions of the linear region file format; it's like they didn't even read the spec docs of zstd before they decided to use it.