NVMe M.2 drive not consistently mounting on startup

So I've got six drives plugged into my unholy mess of a machine: three standard HDDs of varying sizes, two NVMe M.2 SSDs (one boot/root, the other just storage), and a 2.5" SSD. During my initial setup, I used the Partition Manager to set the mount point for each of the non-root drives under /var/mnt/, the same way I did in Nobara a few months ago (though on non-atomic I believe I mounted directly under /mnt/). For whatever reason, the storage M.2 drive occasionally fails to mount on startup (I'd say about 50/50). It's frustrating for Steam, since that's where most of my games are, and when I mount it after the computer has already started, all the games seem to need to redo their Vulkan shaders for whatever reason. I'm hesitant to point to a hardware issue at this point, as it was working flawlessly before I installed Bazzite, and I'm able to mount it immediately with no issues when it doesn't auto-mount. The only change between Nobara and Bazzite is that I installed a freshly formatted new M.2 NVMe in the old Nobara drive's slot (the old drive is currently sitting on my desk); the drive in question was never touched. Is there a better way I should be setting the mount points, or what additional info is needed to help me troubleshoot this? The mobo is also less than four months old.
Solution:
That's your problem: you are not mounting using UUID.
8 Replies
asen23 · 6d ago
post content of /etc/fstab
Buoll (OP) · 6d ago
Interesting, the drive in question, 'WD_Black_M2_2TB', is mounted twice? Under two different partitions? Here's df -h for giggles:
Filesystem Size Used Avail Use% Mounted on
/dev/nvme0n1p3 3.7T 86G 3.6T 3% /sysroot
devtmpfs 4.0M 0 4.0M 0% /dev
tmpfs 16G 208M 16G 2% /dev/shm
efivarfs 128K 16K 108K 13% /sys/firmware/efi/efivars
tmpfs 6.2G 2.7M 6.2G 1% /run
tmpfs 1.0M 0 1.0M 0% /run/credentials/systemd-journald.service
tmpfs 1.0M 0 1.0M 0% /run/credentials/systemd-network-generator.service
tmpfs 1.0M 0 1.0M 0% /run/credentials/systemd-udev-load-credentials.service
tmpfs 1.0M 0 1.0M 0% /run/credentials/systemd-tmpfiles-setup-dev-early.service
tmpfs 1.0M 0 1.0M 0% /run/credentials/systemd-sysctl.service
tmpfs 1.0M 0 1.0M 0% /run/credentials/systemd-tmpfiles-setup-dev.service
tmpfs 1.0M 0 1.0M 0% /run/credentials/systemd-vconsole-setup.service
tmpfs 16G 15M 16G 1% /tmp
/dev/nvme0n1p3 3.7T 86G 3.6T 3% /var
/dev/nvme0n1p2 974M 425M 482M 47% /boot
/dev/nvme0n1p3 3.7T 86G 3.6T 3% /var/home
overlay 3.7T 86G 3.6T 3% /usr/share/sddm/themes
/dev/nvme0n1p1 599M 13M 587M 3% /boot/efi
/dev/sdd1 120G 12G 106G 11% /var/mnt/850_Pro_128GB
tmpfs 1.0M 0 1.0M 0% /run/credentials/systemd-tmpfiles-setup.service
/dev/nvme1n1p1 1.9T 1.3T 579G 69% /var/mnt/WD_Black_M2_2TB
/dev/sda1 932G 351G 581G 38% /var/mnt/WD_Blue_1TB
tmpfs 1.0M 0 1.0M 0% /run/credentials/systemd-resolved.service
/dev/sdb1 5.5T 145G 5.4T 3% /var/mnt/WD_Black_5TB
/dev/sdc1 2.8T 464G 2.3T 17% /var/mnt/WD_Black_3TB
tmpfs 3.1G 4.8M 3.1G 1% /run/user/1000
Solution
asen23 · 6d ago
That's your problem: you are not mounting using UUID.
asen23 · 6d ago
Auto-Mounting Secondary Drives - Bazzite Documentation
Bazzite is a custom image built upon Fedora Atomic Desktops that brings the best of Linux gaming to all of your devices.
Buoll (OP) · 6d ago
UUID=5050f061-b437-4c7e-a051-1fb47379e5f5 / btrfs subvol=root,noatime,lazytime,commit=120,discard=async,compress-force=zstd:1,space_cache=v2 0 0
UUID=952a9317-4c9a-4def-ac49-4a4e22660b2e /boot ext4 defaults 1 2
UUID=1D5C-451F /boot/efi vfat umask=0077,shortname=winnt 0 2
UUID=5050f061-b437-4c7e-a051-1fb47379e5f5 /home btrfs subvol=home,noatime,lazytime,commit=120,discard=async,compress-force=zstd:1,space_cache=v2 0 0
UUID=5050f061-b437-4c7e-a051-1fb47379e5f5 /var btrfs subvol=var,noatime,lazytime,commit=120,discard=async,compress-force=zstd:1,space_cache=v2 0 0
UUID=1d5c70d4-f33f-4106-ace1-463628fbf59b /var/mnt/WD_Black_M2_2TB btrfs nofail 0 0
UUID=56be7a7a-4a23-493d-9e8a-3a975d08bb57 /var/mnt/WD_Blue_1TB btrfs nofail 0 0
UUID=eec8e669-c499-4cfc-8e64-b8ae64222c76 /var/mnt/WD_Black_5TB btrfs nofail 0 0
UUID=983d558d-8646-42a5-ab39-d3f9c2bb5429 /var/mnt/WD_Black_3TB btrfs nofail 0 0
UUID=bcf81af9-dbf0-46b8-9ffb-5eaecd769550 /var/mnt/850_Pro_128GB btrfs nofail 0 0
/dev/nvme0n1p1 /var/mnt/WD_Black_M2_2TB btrfs nofail 0 0
So I've got them all mounted by UUID; it's that last entry that makes me nervous. I was going to delete it manually, but I don't want to brick anything.
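As a quick sanity check, a duplicate mount point like this can be found mechanically rather than by eyeballing the file. A minimal sketch (read-only; it assumes the standard /etc/fstab location):

```shell
# Print every mount point (field 2) from non-comment fstab lines,
# then keep only the ones that appear more than once.
# With the fstab above, this prints /var/mnt/WD_Black_M2_2TB,
# because that path is mounted both by UUID and by /dev/nvme0n1p1.
awk '!/^#/ && NF >= 2 { print $2 }' /etc/fstab | sort | uniq -d
```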
asen23 · 6d ago
Deleting the extra mount shouldn't brick anything. Just don't touch the first five entries.
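For anyone finding this later, the removal can be done with a safety net. A sketch of the steps, assuming the stray line is exactly the /dev/nvme0n1p1 entry shown above (on Bazzite and other Fedora Atomic images, /etc stays writable even though the base image is immutable):

```shell
# Keep a backup so the edit is trivially reversible.
sudo cp /etc/fstab /etc/fstab.bak

# Delete only the device-path line; all the UUID= entries are untouched.
# (Using | as the sed address delimiter so the slashes in the path
# stay literal.)
sudo sed -i '\|^/dev/nvme0n1p1 /var/mnt/WD_Black_M2_2TB|d' /etc/fstab

# Ask systemd to regenerate its mount units from fstab, then try
# mounting everything the file lists. If mount -a reports no errors,
# the next boot should come up clean too.
sudo systemctl daemon-reload
sudo mount -a
```

Mounting by device path is what made the original setup flaky: NVMe enumeration order (nvme0 vs nvme1) is not guaranteed to be stable across boots, while a filesystem UUID always resolves to the same partition.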
Buoll (OP) · 6d ago
Thanks for your help! Marked as solved.