Changing image base from Ublue images to vanilla Silverblue - what is the best approach?
Hi! I have a question: if I wanted to migrate my custom image's base from ublue's `silverblue-main` to Fedora's vanilla Silverblue, what would be the best way to do so successfully?
I've tried doing so several times in the past with these steps:
1. Create a custom Silverblue image in my repo with `quay.io/fedora/fedora-silverblue` as `base-image`. This creates `custom-silverblue-base`.
2. `custom-silverblue-base` builds just fine with the signing module.
3. Update my existing result image `recipe.yml` (let's call this `result-image`): `base-image` is replaced, from `ghcr.io/ublue-os/silverblue-main` to `ghcr.io/username/custom-silverblue-base`.
4. `result-image` builds just fine with the signing module.
5. `result-image` deploys just fine on the target machine, and boots & runs as expected.
6. When running `rpm-ostree update`, it fails with `Error parsing signature storage configuration` errors.
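For reference, the change in step 3 amounts to a one-line edit in the recipe; a minimal sketch, assuming a BlueBuild-style `recipe.yml` (names and module list here are hypothetical, not the actual recipe):

```yml
# hypothetical result-image recipe.yml — only base-image changes
name: result-image
description: Custom image rebased onto my own Silverblue base
# before: base-image: ghcr.io/ublue-os/silverblue-main
base-image: ghcr.io/username/custom-silverblue-base
image-version: latest
modules:
  - type: signing   # sets up signature verification for updates
```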
Some digging around made me think perhaps it has something to do with files under `/etc/pki/containers` and/or `/etc/containers/registries.d/`, but `sudo ostree admin config-diff` doesn't show that the files have been modified locally. Also, my cosign keys are unmodified, so this is not like the case of Bluefin's cosign keys from last year.
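For anyone digging into the same error: the files under `/etc/containers/registries.d/` tell the containers-image stack (which `rpm-ostree` uses) where to find signatures for a given registry scope. A rough sketch of such a file per the containers-registries.d(5) format — the scope and filename are assumptions:

```yml
# hypothetical /etc/containers/registries.d/username.yaml
docker:
  ghcr.io/username/result-image:
    # look for cosign/sigstore signatures attached to the image in the registry
    use-sigstore-attachments: true
```

The matching public key would sit under `/etc/pki/containers/` and be referenced from a `sigstoreSigned` entry in `/etc/containers/policy.json`; a malformed or leftover YAML file in `registries.d` seems a plausible trigger for a "parsing signature storage configuration" error.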
I wonder if it would be more efficient to just create a new repo altogether?
Would appreciate any input on this!
I'm not sure why this issue happens; I migrated from `silverblue-main` to `silverblue` just by changing the `base-image` & it worked fine without any problems.
Maybe it would help if you could share a link to your image, to see if something stands out.
If you don't want to bother fixing the issue & can easily migrate to the new image, you can also make a new repo & start again.

Oh, I've actually reverted to my `silverblue-main` config as of right now LOL. I'll give it more tries over the weekend to see if I can find the possible cause; this isn't the first time I've tried. This time I initially thought it was because the `cosign` package was missing during the initial switch from the `silverblue-main` to the `silverblue` base, but running `custom-silverblue-base` on Boxes gives no problems and it updates just fine.
Will update here once I've gone back to the `silverblue` base config. Or yeah, I guess I'll just eventually build a new repo.

Solution
Gave it another test - it seems like the approach I've described above, or simplified:
1. Create `custom-silverblue-base` in REPO
2. Create `result-image` using `custom-silverblue-base` as `base-image`, also in REPO
3. Deploy & continuously update/check for updates using `rpm-ostree update`
will give the `Error parsing signature storage configuration` errors without fail.
However, this approach works:
1. Create `result-image` using `fedora/fedora-silverblue` or `fedora-ostree-desktops/silverblue`, with a set of recipes that would have gone into `custom-silverblue-base`
2. Deploy & continuously update/check for updates using `rpm-ostree update`
works fine and doesn't give me the parsing-signature errors.
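The working setup, sketched as a single recipe built directly on the vanilla Fedora base — the name, version tag, and module list are hypothetical placeholders:

```yml
# hypothetical recipe.yml built straight on the vanilla base
name: result-image
base-image: quay.io/fedora-ostree-desktops/silverblue
image-version: latest
modules:
  # everything that would otherwise have gone into custom-silverblue-base
  - type: rpm-ostree
    install:
      - some-package   # placeholder
  - type: signing
```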
So for now I've resigned myself to making a `base-recipe.yml` that layers basically everything shared across images (repo files, bling, codecs, layers/removals of unneeded Fedora packages, extensions, etc.), and just `from-file: base-recipe.yml` it in multiple image recipes. I've only tested on one machine though; will give it another try on a different machine.

The 2nd approach works fine! It was probably an issue of the custom base image and the new/result image coming from the same repo and hence having the exact same keys? Either way, I've found a workaround!
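The shared-recipe pattern described above could look roughly like this, assuming BlueBuild's `from-file` module syntax (both files shown together; filenames and module contents are hypothetical):

```yml
# hypothetical recipes/base-recipe.yml — modules shared by every image
modules:
  - type: rpm-ostree
    install:
      - ffmpeg          # codecs, as an example of shared bling

# hypothetical recipes/result-image.yml
name: result-image
base-image: quay.io/fedora-ostree-desktops/silverblue
image-version: latest
modules:
  - from-file: base-recipe.yml   # pull in the shared module list
  - type: signing
```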
I'm not sure how the 1st approach would cause a signing issue; it's common practice to sign all images in an org with a single signing key, for example.
One problem I do see is that if building from the same repo, the base-image and result-image builds might start concurrently, which means the result image could use an outdated base image.
You would have to use a custom GitHub Action/workflow to make it work as intended.
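One way to get that ordering, sketched as a GitHub Actions `workflow_run` trigger — the workflow names are assumptions, and the actual build step is elided:

```yml
# hypothetical .github/workflows/build-result-image.yml
name: build-result-image
on:
  workflow_run:
    # run only after the base-image workflow has finished
    workflows: ["build-custom-silverblue-base"]
    types: [completed]
jobs:
  build:
    # skip if the base-image build failed
    if: ${{ github.event.workflow_run.conclusion == 'success' }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # ... invoke the build action for result-image's recipe here
```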
I actually had `base-image` and `result-image` on different workflows & jobs, so `base-image` being outdated wasn't a potential issue. IME the keys failing to parse happened all the time though, which is puzzling to me. It's no longer a problem because I think the workaround works just as well, but I honestly am still a bit intrigued lol. Will give it another try when I have the free time (and motivation).