GitHub - graybush/testFromContainer
i am having some nfs trouble and i think i have narrowed it down to a difference in the images:
quay.io/fedora-ostree-desktops/silverblue:37
ghcr.io/cgwalters/fedora-silverblue:37
to test I created a repository https://github.com/graybush/testFromContainer and generated two images.
ghcr.io/graybush/cgwalters:37-20230311
ghcr.io/graybush/fedora-ostree-desktops:37-20230311
each image has the same containerfile with the exception of the FROM statement.
to start i have a fully updated, yet stock silverblue:37 install with an nfs mount defined in /etc/fstab that is mounted upon boot. when i rebase from a fully updated fedora/silverblue:37 base to the cgwalters image my nfs mount mounts on boot as expected. but when i rebase from the same fully updated fedora/silverblue:37 base to the fedora-ostree-desktops image my nfs mount does not automatically mount on boot.
what am i missing?
i feel like these troubleshooting things are well suited to threads
so, to be clear, the nfs mount works fine on both images... only on the one, it won't AUTO-mount at boot, right?
i've got a VM i don't mind rebasing to your images
right. from the fedora-ostree-desktop image the mount does not automount, but if i do a
sudo mount -a
it mounts just fine
no messages in journal about this?
not sure, i've been looking at this all day and might have forgotten some things, sorry!
i just spent hours debugging a problem which was a combination of a missing path in /var/log, and then when i attempted to create it, there was a systemd race condition (my ordering wasn't right) and then finally an incorrect selinux context...
so I'm sure it could be super tricky, but there's probably SOME clue,
i am going to put my vm back on the fedora-ostree image and i'll look at the journal
at this point i am pretty confident that if you have an nfs mount defined in the stock silverblue then rebase between those images in one you'll see the mount auto mount just fine and in the other nothing, that is pretty much as far as i've gotten. it's been a journey thus far, lol
when looking thru the journal
journalctl -p 3 -xb
the only errors i see are:
when i look through the journal for the cgwalters image i can search for the mount point
/mnt/nas
and i can clearly see it mounting. but when i search the journal for the fedora-ostree-desktops image there is nothing, i don't see an attempt or error, just nothing.
sorry i disappeared... one child kicked another one off the couch, causing the one kicked to cut open her elbow *sigh*
accidental kicking, thankfully 🙂
ahh no worries, mine go to bed at 10, now I have free time!
yeah, they were up too late
but that was nearly a 45 minute ordeal LOL
nice
ok, so... seeing /mnt/nas work in one journal and NOTHING in the other... that seems of interest
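for searching the journal by unit, here's a minimal sketch of how the mount point maps to a systemd mount unit name (the simple '/'-to-'-' mapping below is an assumption that holds for plain paths like this one; `systemd-escape` handles the general case):

```shell
# sketch: derive the systemd mount unit name for a mount point.
# systemd maps "/mnt/nas" -> "mnt-nas.mount": drop the leading '/',
# turn any remaining '/' into '-', append ".mount".
# (real escaping has more rules; on an actual system,
#   systemd-escape -p --suffix=mount /mnt/nas
# is the authoritative way to get the name.)
mountpoint=/mnt/nas
unit="$(printf '%s' "${mountpoint#/}" | tr '/' '-').mount"
echo "$unit"    # mnt-nas.mount

# on the affected machine one would then search the journal directly, e.g.:
#   journalctl -b -u mnt-nas.mount
```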
ok! so i am not crazy
like... if the same set of packages are installed, then maybe one is configured differently.
what's the fstab nfs mount look like?
pretty basic at this point
cool, i just want to try with same options
no, i really appreciate the second set of eyes. i've been on this all day, it took me a while to narrow it down to something about these two base images
yeah, this is definitely part of the 'experimental' nature of things... it's hard to know where the bug is or if it's just part of the "new way"
does walter's image have some automounting package or some other convenience thing perhaps?
trying to fix a networking issue on my desktop so i can test this the way i want
left and right, you can see on the left systemd
var-lib-nfs-rpc_pipefs.mount
and var-mnt-nas.mount
, then nothing on the right. that is basically what i've observed up until this point: on the cgwalters container things automounting, on the fedora-ostree-desktops container nothing...
i am coming up blank as to what to look for, i don't see errors in the journal or audit.log...
ok, finally got my network bridge setup properly... never should have tried to use the GUI 🙂
it just caused problems
rebasing to your cgwalters-based image
yeah i feel like i need to stand up another vm, i keep rolling back and rebasing back and forth between images
what is systemd-boot-unsigned?
i do not know
the Container files I used to generate the images are exactly the same with the exception of the FROM statement
ok, i just noticed that this was added as something i didn't already have
random: any reason you aren't using ublue-os/main for your base? or is it exactly this issue since it's based on the image which doesn't automount nfs
ublue-os/main ultimately comes from fedora-ostree-desktop
i would like to use ublue-os/main
the container i would like to use is graybush/simsulator, it is for my daughter and it uses ublue-os/main
but i keep the home dirs and some data on an nfs server
so today when attempting to switch her computer over, i ran into this problem
my main workstation has been using silverblue ublue since december
yep makes sense
happy to be another set of eyes, this is the kind of thing i want to work too
yeah so in debug i traced the problem down to when the base for ublue-base switched, so i created these images to get right to the problem
which was about jan 24th i believe
ok, i have an nfs mount in my fstab... and mounted manually.
192.168.129.10:/data/files/music /mnt/music nfs4 defaults 0 0
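a quick field breakdown of that entry, just as a sketch (the line is copied from above; on a systemd system the fstab generator turns it into `mnt-music.mount` and, since nfs4 is a network filesystem, hangs it off remote-fs.target):

```shell
# sketch: split the fstab entry above into its six fields
line='192.168.129.10:/data/files/music /mnt/music nfs4 defaults 0 0'
set -- $line    # word-split on whitespace (safe here: no spaces in any field)
echo "source=$1 target=$2 type=$3 options=$4 dump=$5 fsck=$6"
```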
and.. i'm on
ostree-unverified-registry:ghcr.io/graybush/cgwalters:37-20230311
and... i'll reboot and see what happens
i switched my main workstation over to silverblue in dec, since that time i have had 3 problems, a freeipa problem, this nfs automount issue, and another systemctl --user issue
mounted, auto-mounted, which you expect
other than that, silverblue has been a game changer for me
yeah cgwalters mounts every time
yep
now if all you do is rebase to the fedora-ostree-desktop image it will not automount
k
which may or may not be a big deal, unless you want to keep home dirs on nfs
well, i should say i would typically rollback then rebase
i'm just pinning and rebasing
is there a fundamental difference between pinning and rebasing vs rollback and rebasing?
i would think they are the same, but to make sure i rolled back
well, pinning just lets you ensure that older builds don't get replaced as you upgrade/layer/rebase
i'm not sure what you mean by rollback exactly
rolling back a VM?
after rebasing and confirming that the mount didnt work i would
rpm-ostree rollback
then reboot, then rebase
ah
i do use the pinning feature, just wasn't sure if there was a difference
i don't think rollback is needed
gotcha
not for this anyway
it's more like "oops, i layered something i don't actually want and rollback is fastest way to revert" maybe?
i've never used it 🙂 i'm guessing here
ok, rebooted into fedora-ostree-desktops... comparing rpm package lists and journal logs
i would think pinning would be fine, but just to make sure i would roll back to the stable fedora:37 ver then rebase
as you say, only bolt differs in the package list
yes
for me journal logs didn't show any errors, just an absence of mounting the nas
ok, yeah, i'm going to clone the /etc dirs on the respective images and check that next... it looks like something has to be different in the config
clone the /etc dirs, I do not follow, what do you mean? how?
well, /var and /etc are the mutable bits in this setup
but... /etc is layered ... so your mutable /etc has default stuff underneath
so i'm just storing data between reboots
i'm actually ssh-ing into my VM as root and doing things since it's easier
but i ran these commands so far
and that's stored in my home-dir for easy searching
and for etc
yup, that makes sense
cp -a /etc etc-fedora-ostree-desktops
and cp -a /etc etc-cgwalters
but then i have snapshots to compare
conceptually, not literally 🙂
yup I follow, makes sense to me, thanks for explaining
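a minimal sketch of that snapshot-and-diff flow (the temp dirs and the nfs-client.state file are made-up stand-ins for the real etc-cgwalters / etc-fedora-ostree-desktops copies):

```shell
# sketch: snapshot two config trees, then compare them recursively.
# the /tmp dirs and file names here are stand-ins, not real paths.
rm -rf /tmp/etc-a /tmp/etc-b
mkdir -p /tmp/etc-a/systemd /tmp/etc-b/systemd
echo enabled  > /tmp/etc-a/systemd/nfs-client.state
echo disabled > /tmp/etc-b/systemd/nfs-client.state
diff -r /tmp/etc-a /tmp/etc-b || true   # diff exits 1 when the trees differ
```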
definitely different contents in /etc
whoa, i'm shocked, but... i don't see a difference
how did you compare the two etc's?
i mean there are a couple diffs, but nothing obvious
so, back to journal
on cgwalters, this is something
that happens separately from nfs-client mount of the actual mount point
rebooting to check those out on the other
oh 🙂 yeah, that's simple
nfs-client is not enabled
oops, i missed the
.target
you are too fast for me, are you saying all i needed to do is enable the nfs-client service?
i did that and rebooted, still no automount, but i think we must be closer
partially yes
what we are discovering here is for some reason cgwalters' image has services enabled which are disabled on fedora-ostree-desktops
got it
i did notice that
var-lib-nfs-rpc_pipefs.mount
does not run in the fedora-ostree-desktops image, but i wasn't sure of the root cause
this seems wrong
ok, yeah... that's the 2 targets you need:
then your mount should occur, and reoccur on reboot
just verified with a reboot here, too
i concur
thank you!
you're welcome! it's super weird to me how these "identical" images have different defaults... clearly they are being built differently
can i download this thread?
i'm not copyrighting it if that's what you mean 🙂
i am going to have to go thru this again just to make sure that i appreciate your debug efforts
i just want to fully understand how you got from here to there
it's just gray beard stuff
yeah this is great
i honestly just thought those two images were the same when the switch over happened
like literally, i have a gray beard, LOL i'm old (not too old) but i've been doing it a while...
i've been doing this a few years, don't have a beard yet though, probably can't grow a full one!
so, i'm old school enough that i didn't really "know" how systemd does nfs exactly, i know it would control it, but i didn't recall how exactly
so, I started with the easy to check stuff... which you helped with initially by noticing the difference in journal logs
then confirming those rpm package lists
then comparing /etc files
now that ruled out differences in files like /etc/nfsmount.conf ... but... it may have also been a bit misleading since it didn't check for missing or dangling symlinks... and systemd uses those a fair bit for "enabled" or not, and service/target relationships
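that symlink blind spot can be checked directly; a sketch with a toy directory standing in for /etc/systemd/system/*.wants (the link names here are made up):

```shell
# sketch: a plain content diff can miss enablement symlinks entirely.
# list symlinks explicitly; -xtype l (GNU find) shows only dangling ones.
rm -rf /tmp/wants-demo
mkdir -p /tmp/wants-demo
: > /tmp/wants-demo/real.target                            # stand-in unit file
ln -s /tmp/wants-demo/real.target /tmp/wants-demo/ok.link  # resolvable link
ln -s /no/such/unit.target /tmp/wants-demo/bad.link        # dangling link
find /tmp/wants-demo -type l | sort   # both symlinks
find /tmp/wants-demo -xtype l         # only the dangling one
```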
anyway, then i decided to study the good journal in more detail and then inspect the status of the nfs related services that were running in it
so a key thing i paid attention to on the cgwalters side
Loaded: loaded (/usr/lib/systemd/system/nfs-client.target; enabled; preset: disabled)
its nfs-client.target was enabled, but that's not the default... as the preset is disabled
but when enabling nfs-client
was not enough to automount, i just happened to remember that remote-fs
is a thing
yeah those are excellent points, i am wondering, if i had looked closer at the systemd units that load the nfs mounts, whether i might have caught this
i def know about nfs-client, i'm not so sure about remote-fs
yeah, the most direct path would have been seeing "oh, nfs-client is not running"
then doing status on it... but more than just running status
that clearly shows you that the nfs-client.target interacts with remote-fs.target
yes it does
more specifically, enabling remote-fs.target may be all we need? i will test, now i'm curious
thanks so much for looking at this
i've been looking at this all day and was only able to narrow it down to something with these two images
i am not sure i could have taken it to its completion like you did
thank you
you're welcome! just share the knowledge as you can
will do!
btw
answered the question...
all that seems to be needed is
systemctl enable remote-fs.target
it seems to force nfs-client auto-mounts on reboot even if nfs-client.target is disabled
regardless, having both enabled won't hurt
sounds like I could add that line to my git's Containerfile and it should resolve this issue ... good to know
yeah, that would do it, you'll see a number of
systemctl enable FOO
lines in the ublue-os Containerfiles (or the scripts they call, like
https://github.com/ublue-os/main/blob/main/post-install.sh)
yup, makes sense. i just forked https://github.com/ublue-os/startingpoint so I will probably add it there
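for reference, a sketch of what that Containerfile change could look like (untested; the FROM tag is the one from this thread, and only the `systemctl enable` line is the actual fix discussed above):

```dockerfile
# sketch, not a tested build; adapt the base image to your own
FROM quay.io/fedora-ostree-desktops/silverblue:37

# the fix from this thread: remote-fs.target is what pulls fstab nfs
# mounts in at boot; enabling nfs-client.target as well won't hurt
RUN systemctl enable remote-fs.target nfs-client.target
```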
🙂