What determines Disk Usage for a MySQL container?
My database is only in the region of 200MB, yet the storage volume on Railway is increasing by GBs per day and nearing the 5GB limit.
As I am sure my data is not taking up anywhere near 5GB, is there a way to "recover" this space? Or what is it being used for?
id : 6a6c4123-f4c9-4c41-ba5f-c3115ddc1753
Project ID:
6a6c4123-f4c9-4c41-ba5f-c3115ddc1753
that's mysql for you, hogs disk and memory
I don't mean to turn you away, but a question about mysql's disk usage wouldn't be a railway question, maybe ask in a mysql forum?
Appreciate the response. The mirror I have off-site doesn't come close to that usage though.
Will investigate further.
For reference the offline mirror is 130MB.
If I fetch a backup via script (necessary because Railway previously didn't allow mysqldump access), I get 130MB.
The output from du on my own server shows 130MB.
I just cannot fathom how the volume with the MySQL install from railway is up near 4GB.
I have scripted and run OPTIMIZE TABLE across all tables, and it does not reclaim any disk space at all.
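For reference, the optimize pass was along these lines (a rough sketch; the schema name is illustrative and the real script loops over whatever tables exist):

-- Generate one OPTIMIZE TABLE statement per table in the schema
SELECT CONCAT('OPTIMIZE TABLE `', table_schema, '`.`', table_name, '`;')
FROM information_schema.tables
WHERE table_schema = 'railway';
-- ...then run each generated statement; note this only shrinks the table
-- files themselves, it does not touch binary logs or other server files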
I really think the internal railway deployment for MySQL is bugged somehow.
Direct id of mysql (01c3efa8-fd42-494d-944a-72f8cf1e82f1)
Perhaps the default mysql setup is not quite correct?
There does seem to be something badly wrong with the Docker "service" from Railway.
I am not even sure I can get access to the "service" or the storage to diagnose further.
it's just a mysql docker image, nothing fancy is going on
if there's an issue with disk usage, it's not due to railway, railway is just simply running the image and nothing much more
As above,
The local dev server (not a MySQL docker) shows the DB as 130MB on disk (71MB inside MySQL).
The backup downloaded via the application (built because the old MySQL "plugins" didn't allow mysqldump access) comes out at 130MB.
Running a size query directly against the Railway MySQL docker yields much the same figures.
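(It was a size check along these lines; the exact query isn't important, the grouping is per schema:)

-- Per-schema data + index size as reported by MySQL itself
SELECT table_schema,
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) AS size_mb
FROM information_schema.tables
GROUP BY table_schema;

and it comes back in the same ~71MB ballpark.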
With all due respect, the reported "disk usage" of the docker is way off.
I don't have any access or control over the docker - I just deployed it from railway.
Perhaps it isn't even a docker issue, but an issue on the Railway end in calculating what storage is used.
Either way, my DB is at most 130MB and I am days away from hitting a 5GB limit.
If there was anything more I could do to "fix" the docker or the Railway accounting end I would, and I've also checked MySQL forums for similar problems and not turned up much.
if railway was miscalculating disk usage by that much, a hell of a lot more people would be reporting it.
the size of the data in the database and the disk usage are vastly different things.
Well, things like egress went awry previously, so it isn't outside the realm of possibility.
And I have already clarified that I understand the difference between data size and on-disk usage, and have further clarified the numbers and why the problem appears to be with something outside my control. To recap:
Both MySQL on my server and the Railway docker report similar data sizes.
I have pasted the du output of my server with the identical database - 130MB
The backup file I generate from the MySQL railway docker is approximately the same size.
The only thing which is reporting > 4GB of disk usage is the railway monitoring of the storage volume of the Railway supplied MySQL docker.
Which by coincidence is the only blackbox in this entire scenario.
In fact, isn't the way it works that the MySQL docker runs MySQL but the storage is mounted into it?
So the persistent storage from Railway is what MySQL uses to store its files, but it sits outside the docker itself. (So the docker may not actually be at fault.)
If that's the case, the problem likely lies somewhere in the chain: either in how the volume is mounted into the docker, or in the monitoring of that mount's usage. I'm not sure what I can do to fix either, having no real control over these things.
@Christian - would you mind looking behind the scenes at the mount's usage so we can dispel any guesswork here?
Prompted by a request from @gazhay earlier, we have an open ticket to investigate this further after the holidays. I've added this thread to that ticket. In the meantime, I've offered to increase the Volume size from 5GB to 20GB. I wonder if the size here is related to MySQL binary logging or something, but will let the team investigate or elaborate in more detail.
Perfect, thanks!
Thank you Christian.
Just FYI, I did add the disable-log-bin flag to the launch command and restarted the MySQL service earlier, but it may be that this doesn't delete the older logs?
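If it's useful for the investigation, the logs still sitting on disk can be listed (while binary logging is enabled) with:

-- Lists each binlog file and its size in bytes; a large total here would
-- account for the gap between the data size and the volume usage
SHOW BINARY LOGS;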
Further to this
Ok, some progress:
I re-enabled binary logs by removing the parameter from the start command - I had added this yesterday.
I then ran the raw SQL to purge the binary logs.
Finally, I re-added the disable-log-bin flag and redeployed the container.
Disk space has dramatically reduced - I still think it is far too high - but I will repeat the process above and remove all binary logs to see if that clears more.
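For anyone following along, the purge itself is a one-liner, something along these lines (cutoff illustrative):

-- Remove all binary logs older than the given point in time
PURGE BINARY LOGS BEFORE NOW();
-- or, to drop every binlog and restart numbering from scratch:
-- RESET MASTER;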
It would seem that the docker's default options, namely enabling binary logs, are not a good choice for most, if 71MB of data (130MB on disk on other deployments) balloons to over 4GB.
Perhaps a note could be added to the service deployment page that binary logs are on and will fill the disk unless explicitly purged by the user.
I think this would happen regardless of the disk limit.
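As a middle ground, rather than disabling binary logging outright, MySQL 8 can simply cap how long logs are kept, e.g. (value illustrative):

-- Keep roughly one day of binary logs instead of the 30-day default
SET PERSIST binlog_expire_logs_seconds = 86400;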
Thank you for the update and glad that this seems to explain it. I'll bring the feedback back to the team regarding adding a note or considering other options to help keep the MySQL size down in most cases
just closing the loop - after a little research, we've decided to add the --disable-log-bin flag to the start command of the default MySQL template. i've also updated the readme in the template to reflect this. thanks y'all for raising this!!