Secondary HDD refusing to mount, and it also causes strange CPU load
So I have just installed Bazzite on a new arrangement of hardware and it has been the best experience that I have had with Bazzite so far.
One issue I am having is that my secondary HDD, which was selected during the initial Bazzite installation and is formatted as BTRFS, refuses to mount when I select it in the Dolphin file manager and then enter the password to give it the necessary permissions to mount.
The little "double arrow" icon to the right of the drive's name spins repeatedly, and at the same time the CPU load on my system goes from about 4% to as high as 64%. Bear in mind that no application other than Cooler Control is open when this happens, and I am only using Cooler Control to monitor system load and CPU + GPU temps while attempting to access this HDD.
You can also audibly hear the CPU fan and case fans ramp up to deal with the increased heat output from the higher CPU load.
Why is attempting to access this HDD causing the system to do nothing while the CPU load ramps up?
If you ticked both drives during install, then I believe the installer will do some form of software-based RAID 0 setup. I've never tried it myself, but IIRC there was another help ticket where something weird like that happened.
Maybe open a terminal and run
cat /etc/fstab
to have a look at what your mount points look like.
I'll give fstab a shot and see what it shows.
I wasn't aware that it sets up drives in RAID 0 by default; I must have glossed over that the first two times I followed the installation instructions. This time around I didn't follow the instructions until much later in the process, so it is possible I missed a mention of RAID 0 being a default option when multiple drives are selected.
Given my preference for data reliability, RAID 0 is not something I would want anyway, so if fstab does suggest that RAID 0 is in use then I shall reinstall and make sure to avoid that bump in the road.
Well, the drive doesn't seem to have any fstab entries (which is really strange).
Because the system uses BTRFS, everything is listed by UUID instead of standard device names such as /dev/sdXX or /dev/nvmeXXXXXX (this system doesn't currently support NVMe; I would need to install a PCIe card to add support for it).
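For illustration, the entries look roughly like this (the UUID is made up and the mount options are from memory, so treat it as a sketch):
UUID=abcd1234-ef56-7890-abcd-1234567890ab /home btrfs subvol=home,compress=zstd:1 0 0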
So I don't actually know which one would be the combined RAID 0 volume.
All the partitions relevant to running the OS appear: things like "/boot", "/", "/boot/efi", "/home", and "/var".
But there is no sign of the size of these volumes either, and in KDE Partition Manager the drives just appear as their encrypted BTRFS volumes, again with no indication in the GUI that any RAID configuration exists.
Although, because I have never used (and would never use) RAID 0 in such a setup, I don't actually know how KDE Partition Manager even displays a RAID 0 configuration if one is present.
I shall see what commands I can find online for listing active RAID volumes on the system.
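For anyone following along, the checks I found were along these lines (exact flags from memory, so double-check the man pages):
cat /proc/mdstat              # shows any active mdraid (software RAID) arrays
sudo mdadm --detail --scan    # scans for and summarises mdraid arrays
sudo dmsetup ls               # lists device-mapper volumes (LVM, dm-raid, etc.)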
I have confirmed there are no RAID volumes active on my system.
Given the age of the drive, I am for now going to assume that it is dead; it is a 1TB Seagate drive that is around 15 years old.
There is a non-zero chance that while I was moving the drive it experienced a big enough shock to be damaged, even just slightly, but enough to cause it issues.
I am extremely careful with hardware most of the time, but I will admit that I sometimes jolt a piece of hardware suddenly when trying to move cables and other things out of the way of whatever I am doing, and that action could conceivably subject it to as much as 5-6 G.
I am not sure how resilient HDDs of that era are, but if it has failed then at least that would answer why I am having issues.
I will report back once I have checked the drive in other OSes, including different live environments, to see if the behaviour is consistent between them.
You can use
sudo fdisk -x
to get the device names and UUIDs together. I'll need to test in a VM what happens when I have two drives and select automatic configuration, so I can check what actually happens rather than going off stuff I've read.
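Something like lsblk gives a similar pairing in a tree view, if fdisk's output is too dense (the MOUNTPOINTS column may be called MOUNTPOINT on older util-linux):
lsblk -o NAME,SIZE,FSTYPE,UUID,MOUNTPOINTS   # device tree with filesystem UUIDs and mounts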
Hopefully the drive isn't dead. Do you have a drive enclosure for it?
No enclosure, but I have a powered USB 3.0 to SATA adapter, so if the drive is okay that should show it as such if I do need to remove it from the system.
I'm trying my best not to remove it from the system at all if I can. Instead I'll just wipe the Bazzite install, since it's new, and try out other Linux distributions to see if the issue persists across them; if it does, then I'll do a quick check in a live Windows PE environment, and if it fails there too then it's officially done.
It's not a major loss, but I would rather not pay out for another drive if I can avoid it. I'm not made of money, and this is meant to be a relatively low-cost system made from odd parts that I already had, plus some cheap but necessary upgrades to core components that simply can't be ignored (like the power supply).
Hope it's not a dead drive. Good luck! 🙂
Just tested in my VM. I made a Bazzite machine, selected two different drives in the install media, and it assigned both of them the same UUID, so they come up as the same drive. So your drive might be fine; you just have some form of software RAID set up.
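If you want to confirm it on your machine, BTRFS can report the member devices itself; something like this should list a single filesystem with both drives under it (output shape from memory, details will differ):
sudo btrfs filesystem show
# Label: 'fedora'  uuid: <one shared uuid>
#   Total devices 2  FS bytes used ...
#   devid 1 size ... path /dev/sda3
#   devid 2 size ... path /dev/sdb1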
Oh right; in that case, why would it be that I cannot access it?
Does that mean that in the Dolphin file manager the "/home" directory should conceivably be roughly 80GB (from my 120GB boot SSD) + roughly 960GB (from my 1TB HDD)?
I list less than the full sizes because parts of the drives are preallocated to something, or at least that is what Bazzite seems to think; 25GB of the HDD is used on.... something.... not entirely sure what, but if it is in a RAID 0 configuration then that might explain that 25GB being set aside as part of the RAID 0 setup, allowing it to function as some sort of inter-drive cache between the SSD and the HDD.
My knowledge of how exactly RAID functions and affects an individual drive is really limited when it comes to RAID 0, because I am pretty against RAID 0 and so never really use it. I am more of a RAID 1 person: redundancy over performance. Although for this PC I was sort of hoping I could just have two plain drives, with redundancy being something I would consider at a later date if the data on the system ever became important to me.
I back up important data all the time and keep off-site backups of critical data (no cloud backups, though); my data isn't so important to me that I would need it back immediately if a drive dies or my system goes up in flames.
I still couldn't find any mention of RAID 0 in the installation instructions when I looked earlier. So, if you're any good at making pull requests for the wiki, could I type out a few suggested lines to add and have you submit the pull request, assuming you know how to use Git?
I don't know how to use Git, and I have tried to learn it on numerous occasions but frequently hit a mental block; I basically get overwhelmed by all the terms and phrases, all the buttons in the GitHub GUI, and all the expected ways things should be submitted.
When I want to submit something, it is because it is on my mind and I think it would help others; I don't do any Git myself.
Primarily, though, I'm trying to understand where all my storage space is, LOL.
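I'm going to poke at it with something like the following and see how the space is actually laid out (paths assume /home is on the multi-device filesystem):
sudo btrfs filesystem usage /home    # per-device breakdown of data/metadata allocation
df -h /home                          # total size and free space as the OS sees it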
Okay, I've been really stupid all this time.....
So you are right, it is a RAID 0 volume. I honestly didn't know that RAID 0 was set up by default, so that has caught me by surprise, to be totally honest.
I wasn't looking for it, so I didn't see it; and being the stubborn person that I am, I had assumed the drive just wasn't mounted, because that was the case in other distributions I had used.
Once again, I didn't see anything in the installation guide about RAID 0 being used by default, so that 100% needs to be mentioned and updated. I will see about getting a GitHub issue opened about it so it can be fixed.
Because I won't be the last person to come along wondering where their storage is and why they can't mount their drives after a fresh Bazzite installation.
Thanks for your help @wolfyreload, greatly appreciated!
Yeah, I'm not a fan of RAID 0 at all either. My best friend bought a media server with a RAID 0 setup; one drive failed, and everything was gone. Definitely a good idea to add a warning in the docs never to select more than one drive.
I just tested it in a VM: it split the install between the two disks, and then I deleted one of them to see what would happen. Unsurprisingly, the remaining half of the volume is broken too.

That is totally going to catch out unsuspecting users in the future, when they decide to upgrade their existing HDD or SSD to a newer, larger one and find there's no simple one-click way to do it.
They would have to spend hours transferring data across drives when it should be a very simple 10-20 minute process of physically removing the old drive and inserting the new one. Even for a computing newbie, that task alone is something most people would consider potentially stressful; adding to that stress is not what we (as a community)/Bazzite should be doing.
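To be fair, from what I've read BTRFS itself can migrate a member device in place with its replace command; a rough sketch (device paths and devid are illustrative, and I haven't tested this):
sudo btrfs filesystem show /home             # note the devid of the old drive
sudo btrfs replace start 2 /dev/sdY /home    # copy devid 2 onto the new drive
sudo btrfs filesystem resize 2:max /home     # grow onto the larger drive afterwards
But expecting a newcomer to discover that on their own is exactly the problem.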
This feels like a huge misstep by the Bazzite developer team. I'll raise this in general discussion and see what they think of the suggestion; I certainly don't agree that RAID 0 is a good default, not with the risks and complexity (for new and uninformed users) that it comes with.
The installer comes from Fedora, which inherited it from Red Hat, so I suspect installing vanilla Fedora will have the same issue (at least I think it will). I haven't seen anything in the docs mentioning this, so it would be a good idea to add it, as you are the second person I've seen encounter this.