Hello world!
@j0rge could we possibly pin this? As a longer term thing, it would be helpful for it to not get lost.
ack as soon as I figure out how
we have an older thread in here that might make more sense
but whatever, just get the info in here, we'll figure it out
Oh sure! That works too!
@αchilleaς for visibility 😄
(I'm out this morning, cough isn't going away so doing the doctor thing)
@hippiehacker will be interested in this (he's in NZ time zone so that'll be async)
Sounds good! Hope you feel better soon!
hey hey
👋
alright so update on my side:
This PR makes it possible to do what we need: https://github.com/osbuild/images/pull/322
It's stuck in review (waiting for Simon to fix the git history) from before the holidays but It Works.
(Please pass my regards to mvo, I worked with him for a long time at Canonical, I was so happy to see him working on osbuild, feels like reconnecting with an old friend!)
Once that's merged, adding it to composer becomes very straightforward, and what you'll be able to do is build an ISO that embeds an oci ostree-based container image with Anaconda configured to install it. There are some bits that aren't completely configurable (yet), but we can figure those out. Like maybe we can add an option to make the installer fully unattended vs manual
Oh cool!! I will. Yeah Michael's great. He really hit the ground running when he joined like a month or two ago.
He says Hi back :)
I'm so excited about this! 😄
This is great news @achkts !
fully unattended would be cool if you were doing a mass deployment of machines. I was talking with j0rge and kyle last night and making universal blue a full fledged chromebook alternative would be really cool!
fully unattended is a bit hacked up right now: https://github.com/osbuild/images/pull/322/commits/7ea7d819847a173f75d63b3aeed19f5b2f105206#diff-12a483f85526bab16de723964940dbc6eb71dce708019f7a60c5fcc4f0d478faR367-R377
we're writing a fixed kickstart file onto the ISO because we need this for an internal project. The plan is to have the options in osbuild to do this properly and then we can control things more sanely
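for the curious, that kickstart is roughly this shape (a sketch, not the actual file on the ISO; flags per the pykickstart docs, and the container path is the one that comes up later in this thread):
```
# fully unattended: wipe the disk, deploy the container embedded on the ISO, reboot
text
zerombr
clearpart --all --initlabel
autopart
ostreecontainer --url=/run/install/repo/container --transport=oci --no-signature-verification
reboot
```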
for sure! I don't think Silverblue even has unattended installs yet does it? I know CoreOS does.
Yeah though I guess the idea is that, since Silverblue uses Anaconda, you can always add your own kickstart for that
I've not used Centos/Rhel based live ISOs before. Is there a way to boot the container itself straight from the USB / sans install? (quasi-related: https://askubuntu.com/questions/1330368/live-usb-with-persistence-vs-full-install-on-usb)
I'm sure there's a way but might not be straightforward. The payload for installation and the deployed root tree are quite different. I guess you could always just have both on the same ISO
There's definitely live CentOS ISOs. I don't know if they make live RHEL ones though.
I don't believe RHEL does live ISOs if I recall correctly.
I think you just have the option to install or boot into a recovery shell.
yeah
and there's the DVD ISO that famously does not fit onto a DVD :D
yeah, they really should rename that "Offline ISO"
indeed
so, a burning question in my brain, this PR
can we use the new web installer? or is it still the old hub and spoke?
It's the old one (now)
we could add the new one though. It's just a different package and a few config changes I think
I think we could help with that? have you seen it? it's so good!
cc @KyleGospo (but he's on the left coast, might not be around a while)
we've hit 5m pulls without a reliable installer, I think if we helped out and pushed this one final bit we could go from the worst installer in linux to the best one. 😄
this is so awesome!
ok another question, for netinstall options, the issue we had with it in the past is it wouldn't retry, so if the network connection wasn't steady it would fail right away, we've outright just stopped recommending it.
I think? I saw some work in anaconda but I get confused on what is for the offline thing and what is for the other cases in anaconda
and also do we know where in anaconda we'd do the flatpak installs so we can throw away a bunch of this dotfile jank? (That's more of a question for Kyle and RJ)
the new web installer? Yeah, we have a live installer in osbuild-composer that uses the new UI that the Anaconda team uses to test it.
awesome
Hope so 😄
Thanks so much for the update, it's been great watching the git storm upstream, I just needed a high level explanation of what's happening hah. I hope to buy beers next time I am in europe!
The best incentive
It appears the PR got merged a few hours ago: https://github.com/osbuild/images/pull/322
@KyleGospo Do what must be done, lord vader.
Thank you @αchilleaς and team!
@αchilleaς is there going to be an upstream post or announcement? Like do we know if mattdm knows about the installer?
I don't think we're doing an announcement. Right now this is going into https://github.com/osbuild/bootc-image-builder/
Which is meant to be a new thing for RHEL 9.4.
ok so this should be where we start? I'm going to play with this all weekend heh.
With the merge into images there's not much to announce right now since it's not a "product" yet. Getting it into composer needs a tiny bit more work.
Though if it lands into bootc-image-builder, you can use it there directly
like, looking at the examples: https://github.com/osbuild/bootc-image-builder/?tab=readme-ov-file#-examples
gotcha, I'll think of a way to tie it into fedora upstream so that people are clear that "this is actually just fedora", something that gets a win for the marketing folks, heh
once it lands there, you should be able to do:
and get an ISO out ready to go
(the --type iso is what's missing now)
ah!
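for reference, the README's current qcow2 example is roughly this (a sketch, the flags may have drifted) — presumably the ISO variant just swaps the --type:
```
sudo podman run --rm -it --privileged \
    -v ./output:/output \
    quay.io/centos-bootc/bootc-image-builder:latest \
    --type qcow2 \
    quay.io/centos-bootc/fedora-bootc:eln
```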
and those can be signed images?
signed images should work
that would be pretty slick!
Let's actually try it now with a qcow2
a direct signed image install would fix one of the jankiest parts of our experience
I also would love this for building test VMs with qcow2!
or like cloud workstations you could run in Amazon or Azure.
AWS AMI support is already there too; it can upload the image directly after building and convert it to a ready-to-boot AMI
Oh I bet we could ucore right into aws. 😄
I'd like to try with one of your images and see where (if) it fails
we could offer AMI, qcow2, and ISOs.
I have about an hour before I need to go and I'm done for the week so I'm going to play with this a bit now
yeah and then in github we could just host the netinstallers (these ISOs will be way too huge for github to host)
watching it build now though, seems to be chugging so far.
the nice part is, any machine with podman can make an ISO right?
with ghcr.io/ublue-os/silverblue-main?
yup
I'm doing ghcr.io/ublue-os/bluefin:gts
we even test on macOS with podman machine
yeah that's so nice because iso hosting will be annoying for us, so just telling people "you can also generate your own ISO with this podman command" would be amazing
so for the cloud native nerds, this would be easy, might be a hoop for traditional users.
though it's about as many hoops as using a tool like rufus or DDing an image to a USB.
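(the dd incantation, for reference — /dev/sdX is a placeholder, triple-check it before running:)
```
sudo dd if=install.iso of=/dev/sdX bs=4M status=progress conv=fsync
```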
oh man, if we could just make it
just geniso ghcr.io/blah
then every image will have the capability of making ISOs from anywhere, a cool shortcut.
This could also be nice for folks who want to share their image with others as well.
yeah
"how do I run your image? I have a spare laptop." Run this one command on any machine.
because hosting the ISOs will take time, I'm going to have to call around
we should probably toss them on whatever popular torrent service is out there
moment of truth
👢 it!
how do I use this qcow thing, virt manager?
Virt Manager would be the easiest.
yeah or qemu. there's an example in the readme
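something like this does the trick for a quick look (my sketch along the lines of the README, assuming the disk landed in ./output/qcow2/disk.qcow2):
```
qemu-system-x86_64 \
    -M accel=kvm \
    -cpu host \
    -m 4096 \
    -snapshot \
    ./output/qcow2/disk.qcow2
```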
oh, remove -snapshot to persist changes
So if I understand correctly, this is going to just boot straight into the image? It's not going to attempt opening the anaconda installer, correct?
boo!
OVMF option is trying to boot it with secure boot right?
I'm still a total noob with qemu 😄
yeah this does it directly into an image. It's (more or less):
1. pull container
2. create an os root
3. run ostree container image deploy ... (sketched below)
4. create a disk file
5. partition the disk file
6. mount the partitions
7. copy the os root into the mounted tree
8. set up the bootloader
9. qemu-img convert the disk file to qcow2
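step 3 is the interesting one; it's roughly this invocation, going by ostree-ext's CLI (a sketch, not bib's exact code — paths and the stateroot name are placeholders):
```
ostree container image deploy \
    --sysroot /path/to/osroot \
    --stateroot fedora \
    --imgref ostree-unverified-registry:ghcr.io/ublue-os/bluefin:gts
```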
yes
oh no, that's just uefi, not secure boot
almost done building it too. I'll poke around the disk once it's done, see what's up
❤️ yeah my qemu/kvm knowledge was purged long ago hah
I've become a daily user. Which mostly means I have qemu scripts and a tonne of lines in my shell history to refer to when I need something :D
@αchilleaς are you building the bluefin image or just the base ublue silverblue image?
building the one Jorge used:
ghcr.io/ublue-os/bluefin:gts
I'll try building the base silverblue image and see if we get different results.
hey wait
if it does this with -qcow2
what is the -iso flag gonna do, that'll spit out an anaconda iso?
yup
anaconda iso with the container embedded and a kickstart file configured to install that when it's run
WOW
so one command can do ISOs AND the vm images.
@bsherman I think you will love this for ucore!
and with this command (same tool): https://github.com/osbuild/bootc-image-builder/?tab=readme-ov-file#%EF%B8%8F-cloud-uploaders
you can directly upload it to your aws account and convert it to an AMI
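per the README section linked above, the shape is roughly this (bucket/AMI names are placeholders; AWS credentials come from your environment):
```
sudo podman run --rm -it --privileged \
    --env AWS_ACCESS_KEY_ID --env AWS_SECRET_ACCESS_KEY \
    quay.io/centos-bootc/bootc-image-builder:latest \
    --type ami \
    --aws-ami-name my-ucore-ami \
    --aws-bucket my-upload-bucket \
    --aws-region us-east-1 \
    ghcr.io/ublue-os/ucore:stable
```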
@j0rge just double checking, what did you create for the config.json file?
I didn't. I double-clicked on the disk image file?
the thing could totally work and I just suck at virt-manager
you can skip the config option if you don't need one
I got the same error as you did @j0rge . digging a bit
ahhh ok, so the config.json file is not strictly required.
good to know.
that image is F38, I am assuming it doesn't matter in this case?
we also include bootc on our images
We tried to cover everything in the README, so give it a read when you get the chance
oh right, let me get spun up
I think it should work with F38. What doesn't work is other distros, like CentOS. We're working on that.
I'm totally cool with jumping into voice chat with folks if we think that would help at all.
I've got a lunch today and need to finish off work stuff, but plan on spending all my freetime on this until we figure it out
I can do a few mins
@dogphilosopher I think we found that you're the perfect translator between us.
and then when we play Halo at night you can translate hahah
you know where we can help?
is there an osbuild action?
like maybe we can bust out a github action that people can just enable in their workflows
I'm looking around the disk and it might be useful to compare it with what your bootable disks look like
or like, follow the "official" deployment guide and compare the outcomes, you know?
https://github.com/ublue-os/isogenerator — an action to generate custom ISOs of OCI images
ok lunch here, will catch up async
@αchilleaς this is going to be so awesome, thank you so much!
I have to go in about 20-30 mins
are you just wondering what the filesystem should look like when it's installed on hardware?
I have Bluefin:latest installed on my laptop, we can poke around at it if you'd like.
exactly
yes
let's get a screenshare going.
🗣️ tty?
oh, you're in bazzite already
@j0rge building a qcow2 with ghcr.io/ublue-os/silverblue-main works. it boots!
oh wow!
have to go now but we should also try adding users with the config
and the --iso flag should land sometime next week I gather?
good morning from the land of missed-fun-conversations
gonna play with this here shortly
once I'm out of this meeting
yup
perfect, so we can rev on the workflow stuff and get ahead of it
nice.
Yeah, @αchilleaς and I just finished our working session. I am going to try building the silverblue-main qcow2 image including the user config to get a default user working. Then I will try bluefin:latest and bazzite:latest and see if I can get those to boot.
We're wondering if, because Image Builder is using Fedora 39 build tools to build things, it's having trouble building the bluefin:gts qcow image
If it is the build root/tools, we have plans to make it so that it uses tooling from the repos of the target distribution, so that it can build Fedora, CentOS Stream, and then RHEL. Ideally it would detect it from the container itself or we might add labels to the containers to make it easier.
for sure. currently building silverblue-main with the config.json file. fingers crossed
yes, the config file does work!
gonna try running latest bluefin next.
Try all the things
teaser tweet?
targets I am going to try at the current moment:
1. Bluefin:latest
2. Bazzite:latest
3. Ucore:latest
ucore is coreos based, not silverblue; I am very interested in what happens
exactly! I figured those are the major 3 for Universal Blue, we can always test all the other ones to make sure they build.
I suppose the deck image and bazzite desktop should be tested separately @KyleGospo
trying bluefin crosses fingers
the deck image is just some extra services on top of desktop
so if desktop works I have little reason to think deck won't
🙂
it works! @j0rge
O M G !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
there was a goofy thing that happened when it was booting, but I think it was normal? It created 2 different boot options in grub.
and had to do a fsck on the disk.
yeah it's a known thing, our initramfs script will make a new deployment for the hardware
so it looks funny
the fsck is for sure an issue though
yeah, not sure what's happening there.
I can share the scripts I'm using. As long as you have qemu, you should be able to do it.
you can obviously add more resources to the scripts if you want too
Gonna test bazzite next.
please
@αchilleaς So it does work on bluefin:latest
so it must be at least in some way related to the build tools it's using.
Stolen directly from the docs:
Step 1
Step 2
You'll want to obviously run from a directory that has access to output.
NEAT!!
Base minimum config file:
Bad password!!! 🤣
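for anyone who missed the paste, a minimal config.json looks roughly like this (name/password are placeholders — hence the joke):
```
{
  "blueprint": {
    "customizations": {
      "user": [
        {
          "name": "noel",
          "password": "hunter2",
          "groups": ["wheel"]
        }
      ]
    }
  }
}
```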
building bazzite crosses fingers
I was telling @αchilleaς the osbuild team basically solved a problem I was complaining about months ago for doing local testing of images in VMs with this tool.
can this access a local container registry or does it have to be one that is online?
I have bad news about bazzite. the build failed... 😦
any specific error?
yes
lemme post it.
Coming soon
Gianluca is working on local registry pulls
I think it may not be making the qcow2 disk large enough?
Ah yeah that must be it
would make sense, Bazzite is probably the biggest image here lol
the dots are just like repeated things.
I think I added size config but can't remember if I finished it
there are lots of "no space left on device" and "cannot create directory" errors.
There's an issue already open for it I think. If not, feel free to write one on bootc-image-builder on GitHub
I'll take a look
so as suspected, bluefin works, I just suck at virt-manager?
no! the GTS image does not work.
the latest one does!
https://github.com/osbuild/bootc-image-builder/issues/52 — Dynamic root partition size
maybe this one?
Yeah
Easy fix to make it part of the config
that's caleb, he hangs out here, NZ timezone
cool, so big showstoppers at the moment are:
1. Size of qcow2 images needs to support larger than 2GB (or just be dynamic) -- Bazzite won't build without this.
2. Images that are based on anything other than Fedora 39 don't seem to boot after being created
3. ISO feature implemented for building bootable ISOs
4. Write up a github action to generate ISOs, qcow2 images, and Amazon images
5. Declare flatpaks in the anaconda installer. Solving that would let us delete a ton of jank
6. NICE TO HAVE: Use a local container registry for local testing and development of VMs
Where would I find the anaconda UI that's being built?
want to look into customizing it 🙂
I will go ahead and try building Ucore next and see if it works.
yeah imo let's skip the 90s and go right to the new installer
but I'm a 90s kid!
it's like that scene in fast & furious where the engine is about to blow up
@αchilleaς similar error using ucore:latest:
This is using Fedora CoreOS 37 as base OS
Filed an issue: https://github.com/osbuild/bootc-image-builder/issues/88
Cheers
Gonna link this at the bottom again, just to give a rundown of where I think we are at on this.
Another thought I had for this is we could build a workflow using qemu-convert to create HyperV and VMWare images: https://docs.openstack.org/image-guide/convert-images.html
unless bootc-image-builder is going to support those formats in the future @αchilleaς
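qemu-img makes those conversions one-liners (file names are placeholders):
```
# qcow2 -> VMware
qemu-img convert -f qcow2 -O vmdk disk.qcow2 disk.vmdk
# qcow2 -> Hyper-V
qemu-img convert -f qcow2 -O vhdx disk.qcow2 disk.vhdx
```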
VMware is definitely on the plan because it's one of our biggest targets for RHEL customers
And azure and Google cloud
sick!
everyone gonna hoover up vmware's old customers hah
Who is VMware? I've heard of broadcom 😉
yeah get your dunks in, cloud provider consolidation and fewer upstream contributions to k8s and the rest of the stack will suck
@dogphilosopher oh, we still need to figure out how to declare flatpaks in the anaconda installer. Solving that would let us delete a ton of jank
sounds good. Added it to the list. Should this maybe go into an issue somewhere?
I don't know how developers do things XD
yeah maybe we should do a roll up installer issue? tonight I'll clean up some issues in main, update the project board, maybe do a "where are we now?" status.
Yeah, that's a good idea.
looks cool... i've missed a lot with family stuff past week...
what specifically are you imagining? a pre-built ucore qcow2 image?
yeah, throw them right into amazon for anyone that wants to use them
and then as they add backends to the thing publish images on all the clouds
It would be nice to have images published to the cloud and then you could use butane files for the last mile of configuration.
similar to what they do on the core-os website.
you live rent free in my head, we want all of those things. 😄
"why is my desktop not like coreOS" is something I've been whining about for like 10 years.
The thing is... We're making it happen!
Automation and laziness is why I work with Ansible 😄
@Ondřej Get in here
Heya!
Question! ublue doesn't use bootupd yet, right? I want to switch bootc-image-builder to using bootupd asap. Happy to introduce a switch for our "old" bootloader setup, though.
yeah we don't deviate from fedora in that regard
is it coming in fedora for F40?
No idea. Bootc-based containers are a new initiative and we are still figuring out how to merge it with existing ostree-based Fedora spins.
I'm always down to try anything new. 😄
Anyway, I will tell Michael not to drop the current bootloader setup when he implements bootupd.
But I'm super curious whether we can use the new web ui for ostree installers.
Once Simon finishes his work, we should definitely try it, that would be super cool.
I am very curious also. Because if it is, we'd go straight to it
I'd love to just go right to the future
Indeed! 🙂
this is so great, I've been watching the git logs with great interest over the past 3 months
I mean, for ostree you basically just need a disk setup, right? You don't need networking, because the installation is disconnected. Extra packages don't make sense.
Users are handled via DEs, right?
At least for Gnome I know they are...
actually, do you know what the deal with flatpaks would be?
since we can't put them in the build step we have a bunch of janky scripts to install on first boot, we'd love to remove those.
Like I install firefox in the background, etc.
Ondrej, let's do flatpaks!!
Why isn't it possible to do it in the build step?
rpm-ostree/flatpak limitation
You mean when making the ublue image I guess?
right
yeah
rj hacked a way in, but we ran out of github builder space quickly, so not an option
but if someone had a private builder you could throw a bunch in there
yeah
https://github.com/flatpak/flatpak/issues/5579 — [Feature idea]: /etc/flatpak/preinstall.d
Well, anaconda cannot install flatpaks, right?
it does upstream, it installs the fedora runtime, the repo, etc.
a stock silverblue install: first half of the progress bar is rpm-ostree, the 2nd half is it iterating through the flatpaks and installing them.
that issue has a few ways distros do the flatpak thing.
Oh didn't know about that
yeah, we're firmly on the "whatever you come up with, Owen, we'll do."
for downstream custom images that'll be really key too
Ah, cool, so we would need to embed the flatpaks in the iso, right?
yeah, I am not sure if they're in the iso upstream or if it's snagging them directly from the flatpak repo.
I would imagine both are possible but let's talk with the installer folk and figure it out
in the net installer I'd love for it to just grab from the flatpak repo because you'd have the browser and apps ready to go, no need for an update on first boot since flatpaks usually update often
but a nice bonus with the flatpak update is at least it's a diff, heh.
But having an offline ISO with your flatpaks embedded and ready to go would be really useful too
oh for sure everyone will want both hahah.
I used to run linux labs at an .edu around ~2004. I wish I had a time machine to bring this tech back.
this solves so many problems.
@j0rge one other thing while I'm thinking of it, what is the hack Ublue is doing where we needed to update the initramfs on first boot?
To the average user, I see an error message and would worry it didn't install correctly or something.
Then my computer rebooted and things are fine 😄
That's a Kyle/eyecantcu question
It came from bazzite
I'm just wondering if the new installer can help with that.
@KyleGospo @EyeCantCU
I'm unfortunately not aware of any other way to enable the initramfs or set kernel arguments from the get go. I'm unsure if the new installer would be capable of that or not. I'll have to catch up. Looked at this on Thursday and things look absolutely fantastic but it looks like there's a lot of other stuff I need to read through as well 🙂
Yeah! There is a lot to catch up on lol
Reading it now 🙂
it's required for a number of things, including luks
with an installer it could be avoided for a number of scenarios
but will still be required for some
Okay, all caught up. Wow! Things are so close! The work that has been put into this is absolutely amazing. I need to play with this stuff here in a minute
Just to clarify a little bit - we actually do already set kernel arguments out of the box. All of them should just be there for the Steam Deck post install. Other hardware variants like systems that have SI/CI Radeon GPUs get the kernel arguments added by the script after the fact once we know for sure what hardware the system has
Reading this, I don't think this is something we can get rid of post install or abandon (as people rebasing will need these changes). And I don't think it's that big of an issue. Windows has shown messages during startup for a while now, so us doing the same here (albeit much more direct about what's being done) isn't that big of an issue - to me at least. I don't mind explaining to users why things are handled this way
However, if we can do this in the future, that would be a huge win. We'd need to be able to figure out the hardware in use during installation, adjust kargs, enable initramfs from the get go, etc
Thank you all so much for working so hard on getting this into a significantly better state!
Ope Kyle beat me lol
just to let the group know I have whined to ostree upstream about how having the kernel args in the container file or part of the build step would be an awesome thing to have.
mongo think many word do less mean much
Haha. I get stuck on them sometimes
Yes please, that would be great
This would be amazing to have
Here's the use case example: https://github.com/ublue-os/framework/pull/54
Maybe Framework would benefit from Bazzite's hardware script?
Mario is the upstream (AMD) to Framework. If the OEMs can declare the correct boot args for their device, and do that on a per-image, per-model basis, that would be the win right there.
"This exact laptop is running the exact configuration AMD and Framework recommend." Boom.
That would be a dream
AFAIK Colin Walters wants to put everything into a container. Kernel args, bootloader settings, partitioning..
sounds like my kind of party. 🙂
Thinking about it more - aren't kernel args already handled by a static config when bootupd is used?
I think you can just amend the bootupd config file in the container and you are done.
I think they could be added here but not sure
https://github.com/coreos/bootupd/tree/main/src/grub2
I think so.
Well, anaconda recently got support for bootupd, so I think there are no blockers for the change. https://fedoraproject.org/wiki/Changes/FedoraSilverblueBootupd
F40 hype!
I'm going to have to look more into this!
fedora's going to go from one of the most not-awesome desktop installers to the best one. No question. 😄
"We can start planning for the removal of the ostree-grub2 package from the images to solve the "all deployments are shown twice in GRUB" issue."
this just keeps getting better, heh
Nooooo lol: "... bootupd will not manage anything related to the kernel such as kernel arguments; that's for tools like grubby and ostree."
I think this makes a stronger case for somehow managing these directly in the Containerfile
I think the "ostree" part implies it'd be in the containerfile right?
I believe so 🙂
heee heee, I feel like a kid in a candy store
No promises, I just wanted to see whether the installer can already handle this. Yup, it can. 🥳
that looks so good!
@αchilleaς is there a specific issue for the --iso flag that we should be following? Looking through the open PRs and issues and nothing's jumping out at me
I was able to find this open PR: https://github.com/osbuild/bootc-image-builder/pull/58
I also did not see a specific issue though.
There's a Jira ticket on issues.redhat.com about it
Let me see if it's public
Sounds good, even if it's not public, just let me know what it is and I'd like to follow it internally.
oh this has all the goodies, thank you!
ENJOY!
Yeah! I linked that in our issue silly 😄
indeed, I have come full circle, I was reading the issue and then saw the cross link
this is going to make a great talk someday
@Kyle Gospo supakeen switched to zstd upstream
yessss
that's huge!
or ... small!
Is this for compression of the images?
Just for the squashfs of the installer iirc
The container image is outside I think/hope.
@EyeCantCU I'm in Boulder Colorado now, we should get together. Pop up to my offices at Pearl and Broadway sometime soon
btw, just to make sure we are on the same page: The compression was xz before. I asked supakeen to change that to something faster, because xz is super-slow and I primarily care about people generating the ISOs and putting them on a flash drive locally. I guess that for ublue xz might be something to consider - the ISO will boot a bit slower, but the artifact would be smaller which might be more important for downloads.
Lemme know, I'm happy to implement a switch between fast/bad and slow/better compression.
But I guess this isn't a blocker, the ISO will work with both xz and zstd. So maybe we can keep it zstd for everyone and if people start to complain too much that the ISO is huge, we can start thinking of bringing xz optionally.
Either choice requires us to find hosting since it doesn't fit on github anyway, I don't have a strong opinion.
Would the netinstall anaconda also be generated from the same workflow?
Because that one we can host on github
Looks like this might merge today? https://github.com/osbuild/bootc-image-builder/pull/58
Netinstall is something that we don't target right now. Should be quite trivial, but not a priority.
It merged! @j0rge @Kyle Gospo
Thank you @Ondřej and @αchilleaς! Thank the entire team on our behalf as well!
Really appreciate all the work you are doing.
You are welcome!
There are still a ton of little tweaks that could be done, but hey, it builds something that's bootable and can install stuff.
You will likely be hearing from us soon 😛
That's a good problem to have.
when we write this up I'll make sure we share the drafts so we document the work that you've accomplished, this is awesome.
I'll be out of a meeting to test shortly.
I'll probably start with bluefin:latest since that built properly in the qcow2 image.
Starting build crosses fingers
it built, I saw a few errors along the way.
gonna still try and see if it boots.
wait how would it build, I thought there was a gig limit?
there's like 2 other issues right? or is this the only thing we needed at this time?
(sorry I'm in and out)
So unfortunately, it doesn't just work.
I'll post the output of what it logs when trying to build Bluefin and the command I'm using to do it.
Command I am running:
I'm making some assumptions here @Ondřej @αchilleaς
One of the main errors I saw in both Anaconda and the logs was related to disks.
How did you launch the iso?
yes, this is the anaconda screen I got. It cannot connect to my network bridge either.
But how did you boot the machine? Is it a qemu with an iso and an empty disk attached?
I'm using virt-manager.
I have a log of the build.
but discord won't let me upload it for some reason X_X
I would be interested in the anaconda logs, the build apparently went fine.
There were a few things I saw in there that made me sus about whether the actual build worked..
I'll try to link some of them real quick.
Is this command correct in the way I'm using it?
Yup
OK cool, at least I got that part right 😄
All these things in the logs are expected.
So, how did you set up your VM? An iso and an empty disk drive?
yes. I also had to pick an operating system and I selected Silverblue 39 for Virt-Manager
as the type.
if you wanna try qemu, this is how I would launch it
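roughly along these lines (a sketch — the OVMF path varies by distro, and the iso/disk names are placeholders):
```
qemu-img create -f qcow2 disk.qcow2 20G
qemu-system-x86_64 \
    -M accel=kvm -cpu host -m 4096 \
    -bios /usr/share/OVMF/OVMF_CODE.fd \
    -cdrom install.iso \
    disk.qcow2
```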
sure, I can give that a shot.
Fancy guy with uefi.
right, you can skip the -bios bit if you want BIOS ;)
I just pulled that from my shell history tbh
seems to be working with qemu...
I wonder what I did wrong trying to set it up in virt-manager.
It might be the kickstart that bib embeds into the iso.
it's not done yet, so hold your breath for a little longer 😄
Maybe it doesn't work with the disk setup that libvirt creates.
Maybe we should start testing with virt-install rather than qemu. 🤔
(I just prefer qemu for quick, transient VMs).
I can hit yes, I just figured I would ask first.
Well, it won't probably boot.
Silverblue doesn't have bootupd yet.
Another difference from what we test against.
oh... so the fedora image you guys are testing with has bootupd?
Yup
@Ondřej but anaconda/ks is doing ostreecontainer ... and that doesn't need bootupd, does it?
It doesn't. But it's not tested either on our side or in anaconda.
right
... efi boot target it says... you're right that it's likely something missing or different about the base container image. An Anaconda log would be useful.
just out of curiosity, I'm going to try build the ucore image which is based on CoreOS and see if I get a different result.
I expect better outcomes because of bootupd.
I'm happy to investigate tomorrow. The disk error especially is a bit concerning.
for sure! I'm happy to help test and work with you guys on this!
I know it's getting late in your timezone 😛
I should head home
You still in the office? Go home!
you work in an office!?
@Ondřej @αchilleaς feel free to ping me here or on Slack if you'd like to work more on this tomorrow!
I'm testing ucore now
Unfortunately, same result:
Interesting, thanks for trying.
No problem!
Anyone else willing to test this out and see if they can get any farther than I did, by all means please do! I gotta figure out how to pull Anaconda logs.
You're a saint! Time to grab lunch though 😛
Enjoy!
Created this issue: https://github.com/osbuild/bootc-image-builder/issues/114
oh, the 2GB limitation is for the qcow2 ones, gotcha
can we/should we track netinstalls in this tracking issue?
Let me talk with the folks tomorrow and see if they would like me to open an issue in bootc-image-builder.
Just curious if that's too far out as far as features are concerned.
Have you had any luck finding potential hosting options?
yeah I am still fuzzy on what features come "for free" and which ones need more dev work.
nope, work is taking up all my time rn.
no worries!
so I suspect a bunch of "here's the podman command, generate your own, start testing"
Yeah, hopefully we can at least have a general idea of what is causing the anaconda installation to fail.
The ISOs are generating, but something in Anaconda is where the hang up is I think.
I'm just not that experienced with troubleshooting ISO installation issues using Anaconda 😄
is anyone? hahahah
yeah the iso size is why I also ask about the netinstaller.
ISO it generated for me currently puts Bluefin at 3.8GB
oh wow that's much smaller than I expected
nice.
the problem is a bunch of these mirror services want a service running for them to sync from. For example fcix wants a mirror manager setup so that they can two-pass rsync from it
What's the size limit for release sizes on Github?
couldn't we just put it in there?
2GB
X_X
so close
I see why you want NetInstaller 😄
Realistically, we wouldn't have to do releases that often though.
because we have the auto updating.
yeah for 2 reasons, a) we can autogenerate netinstallers when we release with the release-please action, which is awesome, you just haven't been around to see that happen yet.
so storage requirements might not be outrageous.
yeah, I would do one after every major kernel release personally
which is ... way more than you think, hahaha
compared to upstream fedora, which does it once per cycle.
but bazzite needs the offline one more than the netinstall because of the very nature of the device.
exactly!
and the offline one is something that would be handy for ... the support folks at Framework kinda thing.
yeah, so I do think we would want both.
and we wanna do things like use one of those usb services to splat the offline installer onto usb keys for conferences, etc.
Right!
but what I love about this is anyone with podman can make an iso.
Live USB would be really cool, but that feels pretty far away.
so like, having just geniso so people can make ISOs to help testing while we find resources is fine, since we're waiting for F40 anyway.
true!
and it may very well be that akdev's wasabi account can handle both offline ISOs
but I'm not confident in an old sunset plan from a vendor reselling S3 for a fraction of the actual cost either, hahah
Right, I really only think Bazzite, Bluefin, and UCore should be offline ISOs
I think getting rid of Bluefin-DX and having a toggle is probably the way to go.
keeps it really simple that way.
and devs are the kind of audience that having them run a command to go into dev-mode is fine.
oh for sure
offline for the major three, if you want an offline one of your favorite image, here's a podman command and you can build it on the spot.
and since it's podman you can run that on other OSes as well
exactly!
@Noel ftr, I created an issue also for the kickstart insufficient error: https://github.com/osbuild/bootc-image-builder/issues/116
Awesome! Thank you for doing that.
We will investigate, the ISO not working under libvirt is quite concerning.
Did you see this as well?
https://github.com/osbuild/bootc-image-builder/issues/114 — Unable to install from generated ISO
yup
Thanks for posting all the logs.
I will look into anaconda's sources to see what the hell it is doing. 😄
The hero we need.
Some day over beers I will tell you about my friends' and my attempt to dive into Anaconda
Quick question: which image did you use to test the ISO installer with initially?
quay.io/centos-bootc/fedora-bootc:eln
I can try using it in virt-manager in a bit and see if it fails.
that would be lovely 🙂
(I'm fairly sure it will fail, the error happens way before anaconda does anything with the payload)
Doesn't hurt to try I suppose.
yup
chm... We hardcode an EFI partition in the kickstart currently.
If virt-manager doesn't use EFI by default, then anaconda might be confused.
Oh, maybe I should try booting with EFI mode? I'm pretty sure it does that by default, but maybe not.
It would be useful to know if the error happens both under BIOS and UEFI.
@Noel I just checked, it's a BIOS-only bug.
The kickstart currently always creates a /boot/efi partition, which might confuse anaconda.
But that's just my wild guess, I don't want to step on supakeen's toes.
That would definitely confuse it I would think.
just to send a quick update also here (not just on github): ucore:stable has an old bootupd. ucore:testing should work
No progress with bluefin yet.
Also: bios boot is generally broken, but it's a simple fix 🙂
You are so awesome! We will check and see if we can put bootupd in our main repos on a test branch.
Maybe that will resolve that dependency.
ucore:testing installed successfully from the ISO.
@j0rge @Kyle Gospo Hi folks, what is the most basic container image that I can build locally to smoke test adding bootupd into the list of packages?
probably ucore/coreos
how about any that are based on silverblue or some variant?
I suppose if I can find the container file for our upstream, I could try building on that.
sadly any change just makes it bigger, so best would be stock SB from quay
and then only add that package
I'm a dummy. I could just do a FROM statement, couldn't I? lol
yeah FROM + RUN
and then just do a DNF install as another layer.
sure. that will get the job done for now.
would need to be rpm-ostree install
but you got it
2 lines
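so, roughly (the quay path is from memory, double-check the tag):
```
FROM quay.io/fedora-ostree-desktops/silverblue:39
RUN rpm-ostree install bootupd
```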
I just realized I may not be able to run it from a local registry, but I could upload my smoke-test container to quay or something.
or just build it in github for free and grab from ghcr.io
ez pz
why can't you use a local registry? certainly some effort to set up, but curious what actual hard limit you see?
I opened an issue on it, that might be only related to creating the qcow2 image though come to think of it.
🙂
then push that to any registry you like
You can spin a temporary registry if you don't want to push anywhere
Y'all rock.
https://hub.docker.com/_/registry — registry, Docker Official Image
as a reminder, I am an ansible guy, not an Openshift or Kubernetes guy 😉
podman build -t localhost:5000/my-image .
podman push localhost:5000/my-image
You just need to run bib with --net=host
--net=host?
sorry forgot to reply
Add --tls-verify=false before localhost:5000/xyz
no dice. I wonder if I need to create the registry container with basic HTTPS instead of http?
why using sudo?
Nah, you need to pass --tls-verify=false to bootc-image-builder, not to podman. 🙂
So move it after --type iso
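so the whole local loop would be roughly (port and image name are placeholders):
```
podman run -d -p 5000:5000 --name registry docker.io/library/registry:2
podman build -t localhost:5000/my-image .
podman push --tls-verify=false localhost:5000/my-image
sudo podman run --rm -it --privileged --net=host \
    -v ./output:/output \
    quay.io/centos-bootc/bootc-image-builder:latest \
    --type iso --tls-verify=false \
    localhost:5000/my-image
```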
that's what the docs say to do, ask @Ondřej 😛
🙂 i realized after I typed it, it's needed for --net=host at least
doing more than it was before.
I guess that building an iso could be done rootless. 🤔 I need to try it at some point.
Building disk images rootless is way more tricky because you need to mount block devices. :/
question @Ondřej, is it possible to have the image builder not ship a kickstart file by default? Only reason I ask is it skips all the options for setting up users and such.
it's mostly helpful in our use case
ok, moment of truth, gonna try and use the ISO it made.
We want a fully unattended installation by default. But it's definitely possible to implement a switch for this.
I'm just thinking because most people who install their computer want to set a user account, network settings, etc.
nice GUI wizard that Anaconda is.
also one other thing that differs (which is probably fine?) is that it uses a large EXT4 partition by default rather than btrfs.
I don't know if that's a dealkiller for anyone here or not.
not as simple as we were hoping.
the default provided kickstart file uses EXT4?
it seems pretty clear adding one's own kickstart file would be important, i have no doubt that's part of the plan or already possible
a switch would be awesome, such a versatile tool
this is a different error than what I was getting previously.
is there anything more useful in the console?
I can pull the anaconda logs from this guy
also *angry man shakes fist at cloud* you are about to get me to attempt to replicate your efforts
Here is all I did for my container file.
I get paid for making sure that you can make an iso, dd it on a usb drive, plug it into a server and bam! You have an installed system. That's why the focus was on unattended so far. 🙂
bootupd might need some configuration. Not sure, we will be figuring that out next week.
no worries! I appreciate you stopping by in the chat at all!
I might get some folks to obsess over this just a bit 😛
@Noel we could compare fcos:testing vs silverblue+bootupd and see if there are differences
I would definitely like to have support for partitioning in the tool. However, there are about a billion ways of doing this, so I'm not sure where to start.
The ultimate goal is that the partitioning is defined as a label on a container image and bib or installer just uses that as a source of truth.
it's good we have ucore so we have something to compare to.
So TLDR is:
1. Bluefin works with qcow2 image creation, but does not work with ISOs yet
2. Bazzite doesn't work with QCOW2 at all because it's too chunky and doesn't work with ISOs for the same reason as Bluefin
3. Ucore:testing works with ISO creation, but not with QCOW2 image creation because it's not Fedora 39
We tried adding bootupd to Bluefin; it still doesn't work, but gives a different error:
failed to write bootloader configuration
@Noel start by finding what bootupd is complaining about.
i've been reading this and thinking... curious how that works with ignition
maybe it doesn't matter
doesn't ignition happen after the OS is booted up?
yes, that's why maybe it doesn't matter... i know butane/ignition can do some repartitioning and formatting too, so maybe on the coreos side, at least, even the ext4 default is a non-issue
i'm not sure, just thinking out loud
yeah. I'm just wondering if this will be nice for people to use rather than ignition files.
ok, i am trying to get setup to test these scenarios
like, create your image into an ISO, unattended install, bam.
but i LIKE ignition 😄
If you are free, we can drop into voice chat for a bit if you'd like.
We want to achieve this without ignition and coreos-installer.
so is the long term plan for bootc, bib, etc, to have a common solution for atomic desktops and coreos which deprecates ignition/coreos-installer?
I'm not. Need to be with kids.
sorry, I meant to respond to bsherman 😛
i can drop into #tty to try to get setup
gimme 2 minutes, BRB.
This is a tough question. The focus of the bootable container initiative that we started is primarily on servers/clouds. I'm fairly sure that we want to use anaconda as the installer instead of coreos-installer. We want to focus as much as possible on defining stuff in containers themselves, so ignition might actually become redundant.
Note that coreos is actually quite separate from this initiative. We are working hard on proposing a Fedora change for building new Fedora containers that are lighter than coreos, contain bootc but no ignition, and where the installer is anaconda instead of coreos-installer.
yep, I hadn't been following this too closely, but I had started to assume much of what you just stated. I almost exactly said "ignition is starting to look redundant"
and the config "blueprint.customizations" looks very much to be based on the existing rhel/openshift composer stuff
Also, this will be container-based from the beginning. We want rpm-ostree to go into the background there. Bootc ftw.
seems like a pretty good direction from my perspective. i'd like to see more unified behavior between atomic desktops and atomic "server" (coreos)
I like it as well. 🙂
@Ondřej
@Noel You can also consider running this line in your Containerfile:
https://github.com/CentOS/centos-bootc/blob/main/tier-0/bootupd.yaml#L31C49-L31C49
/usr/bin/bootupctl backend generate-update-metadata /
If we need to poke someone to update bootupd in fedora I can do that, just lmk
we are talking about this in tty
ack
we have this version of bootupd in our test silverblue image
https://packages.fedoraproject.org/pkgs/rust-bootupd/bootupd/fedora-39-updates.html
I'm in calls for the next hour minimum
no worries
i have reproduced this, too
in other news, ucore:stable works as a bootable qcow2 image with this bib tool
did you try the ami one?
ucore on amazon marketplace tonight. bsherman buys a boat on monday.
I use latest. and latest isn't a thing!
https://packages.fedoraproject.org/pkgs/rpm-ostree/rpm-ostree/
This is what we are currently testing as we realized one difference between Fedora ELN (rawhide based) and Fedora 39 is a newer rpm-ostree
still get this error with the newer rpm-ostree
https://discord.com/channels/1072614816579063828/1192504002252914791/1195472637065957477
quay.io/centos-bootc/fedora-bootc:eln
RUN touch /etc/default/grub
we got a silverblue-main:39 ISO built!
amazing! 🙂
How did you find out that /etc/default/grub is needed?
we found the logs on the ostree admin failure and it was caused by a failure of grub2-mkconfig
we don't understand why it's needed, since the fedora-eln container image also did not have /etc/default/grub present... so it seems that maybe something else is failing to generate the file on Fedora 39 vs ELN?
*is reading ostree sources*
chm, grub2-mkconfig shouldn't fail without /etc/default/grub, there's something weird going on
Agreed
Yeah. When I have some time, I'm going to add all we learned today to the issue we have open. Once we saw it boot up, we hit some other issues which we can hopefully collaborate on.
I just need to eat dinner and be away from my computer for a bit 😭
touch grub > /my-belly
Ah!
/etc/grub.d/15_ostree (a symlink to /usr/libexec/libostree/grub2-15_ostree) unconditionally sources /etc/default/grub
Ah, silverblue comes with ostree-grub2, but fedora-bootc:eln doesn't have it!
Is this an easy fix?
lol, ostree-grub2 was dropped from coreos 4 years ago
Drop ostree-grub2 · coreos/fedora-coreos-config@fb1e4b0: "Since we switched to having the BLS be canonical and use the ostree.bootloader=none config, we don't need the 15_ostree stuff. Drop the ostree-grub2 package and that cruft, finally a..."
tale as old as time
honestly, I don't know 🤷♂️
we wasted weeks trying to figure out why the deck's GPU was numbered differently on Fedora's kernel
only to find in the issue tracker that the switch to simpledrm consumes card0 at boot and "this will be ok and not affect anyone"
(it broke everything)
we can try to build a silverblue image without ostree-grub2 and see what happens ...
the benefit is that the setup is more similar to coreos
the drawback is that having a different bootloader setup than upstream has is scary
silverblue guys want to have bootupd in F40, but that thing is stuck: https://pagure.io/workstation-ostree-config/pull-request/403
but I guess now that anaconda has a proper bootupd support, this might actually be possible (last time they tried, anaconda blew up)
anyway, I'm building an ISO from unmodified ublue/silverblue. Anaconda should work even without bootupd, so let's see what's wrong.
interesting stuff... i'm inclined to push forward ... at the very least to test things
basic question, will the bootc-image-builder make its way to cockpit-composer or is there a separate ui for osbuild?
Would just be an rpm-ostree uninstall ostree-grub2, right? Added to our container file?
already testing it...
rpm-ostree override remove ostree-grub2
You rock.
There will be a brand new UI in Podman Desktop. https://github.com/containers/podman-desktop-extension-bootc
We might introduce the same features to cockpit-composer or hosted Image Builder later (console.fedorainfracloud.org!), but nothing is set in stone yet.
yeah I saw the thread on the upstream forum
it would be incredible in podman desktop, I'll go talk to mo duffy. 😄
huh, where? 🙂
well, it's already happening, so stay tuned! 🙂
I apologize for sending this to you on a friday
hahahaha
you mean the thread? 😄 No worries, I saw it.
I'm glad someone took me seriously when I said "help us delete most of the project" hahah
the podman desktop part, that's smart, whoever thought of that one gets a 👍 from me
yup
also, it's available everywhere...
Want a fresh image, but you are on Windows? Just download Podman Desktop.
yeah
I didn't even think about that!
Oh man
I might have found out why the installer crashes when installing a bootupd-less container.
The installer determines the OS name for EFI from a buildstamp, which is a file in the ISO that bootc-image-builder creates.
The product field is currently left empty. Which is exactly the one that the installer needs.
for the record, building an ISO with bib from this containerfile DOES work:
Yup, this one-line patch fixes the installer with an unmodified silverblue-main:39: https://github.com/osbuild/bootc-image-builder/pull/118
Recap before I go to bed:
Bios boot will be fixed by https://github.com/osbuild/images/pull/360
Silverblue (and other images without bootupd) will be fixed by https://github.com/osbuild/bootc-image-builder/pull/118
ucore will fix itself once the new bootupd version hits the stable branch
gn!
good night!
thank you so much for all the help!
Good night. You are absolutely amazing!
I've merged the product name fix. All these silverblues and friends should just work now 🙂
I'll take a look at this now and let you know! Thanks so much for working on this!
Yes, with no changes to our Containerfile, Bluefin installs.
@Ondřej 2 questions:
1. I know we discussed possibly removing the default kickstart file in order for folks customize their storage and networking. Should I open a separate issue for that?
2. Is it possible to have rpm-ostree point upstream to our actual signed image? Currently it is pointing to /run/install/repo/container by default as you will see in the screenshot
Yup, feel free to open issues for both. #2 is actually something that we will definitely want to fix.
For #1: I suggest focusing more on what features you need rather than the concrete technology (kickstart). So if you want to be able to interactively configure partitioning, users, whatever, just make a list. 🙂
For sure! I just know that users are used to using Anaconda to do that interactively. Specifically in our use case, we want to ship the ISO and have users be able to do configuration during install.
I'll open a couple issues for sure on this!
I'm very interested in what should be actually interactive in the installer.
(I want you not to focus on kickstarts, because if we just remove it, the installer will most probably break.)
got it!
that makes sense.
@Noel I think we should make more of a spec and less a "list of issues", should we fire up a hackmd and then add that to the issue?
I can work on this today but we're currently buried in snow so I need to unbury heh
we can! I'm free all day, so lemme know a time and we can try to bang this out.
testing Bazzite now.
Do I need to log in?
I see an option for logging in with github
probably? I use my github account
oh yeah we have that on for spam reasons
@j0rge I'm in tty if you want to chat.
kid is making a ton of noise playing games, I'll join when I can and we can partner up
so the issue for Bazzite (or any KDE image) is that it doesn't have a first-run wizard.
so creating a user needs to happen in anaconda or something.
@Ondřej So I tried using an ISO I made and it is claiming kickstart insufficient in BIOS mode:
oh weird.
I get the error, but when I click on installation destination and hit done, it works.
or it at least tries to start, I haven't completed the install yet...
Yup, bios is still broken.
OK, it actually did work, I just have to click on installation destination, hit done and it installs properly.
so it's not picking up something for whatever reason.
What is happening is that the kickstart currently doesn't specify a partition for the 1.5 stage of grub (that is needed only for bios boot). Anaconda correctly recognizes that as an issue, so it complains. Once you click on the destination, it probably just fixes the partitioning under the hood.
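in kickstart terms, the missing bit would be something like this (a sketch of what would satisfy it, not necessarily the fix bib will ship):
```
# 1 MiB BIOS boot partition for GRUB's stage 1.5 on BIOS machines
part biosboot --fstype=biosboot --size=1
```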
Awesome! Thank you so much for explaining it!
For #2: I think the deployment command has an argument, target-imgref, that Anaconda/kickstart doesn't expose. Or maybe they do now? I think we were talking about it at some point with Hippie. It would certainly be great to install from the disk but "subscribe" to the original location of the image, or even a custom location.
Let's ask Simon to take a look on Monday. This is a fairly major bug.
Also I have no idea how to distinguish between a verified and an unverified registry. Maybe we will need to loop in Colin.
yeah, not sure about that either
ok here's what I have so far.
Could this be the "Container URL" and a "Container Remote" not being properly set?
Perhaps the "Container URL" is the local instance of that container, but as you suggested, "Container Remote" is the subscription to the upstream registry.
I see in the osbuild/images repo you are passing in /run/install/repo/container as the container URL but nothing ("") as the container remote.
Definition: https://github.com/osbuild/images/blob/main/pkg/osbuild/kickstart_stage.go#L102-L109
Implementation: https://github.com/osbuild/images/blob/main/pkg/manifest/anaconda_installer_iso_tree.go#L366-L373
(Trying to understand the terminology used everywhere)
Hmm, afaik there are no plans to use the new webui for Silverblue in F40. The change targets workstation only. We can certainly do it in ublue 40, but that's deviating from the upstream.
I am willing to investigate that to its fullest potential. 😄
Those map to these: https://pykickstart.readthedocs.io/en/latest/kickstart-docs.html#ostreecontainer
The URL is set, that's fine. The remote is just a name though, not a URL or ref.
If you look at the ostree container image deploy help: that's what we need to call. Unfortunately, Anaconda/kickstart doesn't support setting that option: https://github.com/rhinstaller/anaconda/discussions/5197#discussioncomment-7423903
Let's talk to the installer folks next week. Worst case scenario: fix it in %post
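For the record, a sketch of the call we'd want Anaconda to end up making (flag names from the ostree-ext CLI; both image refs here are placeholders):
ostree container image deploy \
  --sysroot /mnt/sysimage \
  --stateroot fedora \
  --imgref ostree-unverified-image:oci:/run/install/repo/container \
  --target-imgref ostree-image-signed:docker://ghcr.io/ublue-os/bluefin:latest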
Yeah, spoke to Jirka about it at some point and I got the impression that it's not an easy change in Anaconda. Seemed like we'd have to do it in %post.
Silly Ondrej. What are you doing here at midnight?
Ah.. I was hoping it might be something I could contribute. Adding an option to the struct and feeding it through, I could handle. Not changing the installer.
Will look at some kickstart things instead.
My younger one refuses to sleep.
I know that life.
I know how that feels. I refuse to sleep too.
@j0rge made some slight tweaks to the spec doc. Mostly just language and sentence structure.
OK I lied. I added a few things I remembered 😄
yeah toss it in the issue
otherwise we'll meander on it
In our issue or an upstream one?
I think ours is fine, unless ondrej wants it in a specific place?
Cool yeah, I added it to our issue already.
Something that may be worth adding is being able to specify default user groups after install. We currently have some hacky oneshot service which adds wheel users to other groups, but it would be good to do this from the installer.
Like "I want the initial user to be added to the incus group"
I believe anaconda does support adding user as an admin user. I can add this to the spec though.
Something like incus might not work unless we add the group first.
But with the ISO, the group could be created from the Containerfile though? There will be no existing /etc/group file that will conflict with the groups we ship as it's not a live system yet
That’s a good point
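A minimal sketch of doing that at image build time (groupadd here is an assumption on my part; a sysusers.d drop-in would be the more declarative route):
# Containerfile: create the group before the installer ever runs
RUN groupadd --system incus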
Oh. That being said, we will still need a way to update existing systems if we add some new software that needs groups.
Maybe this can be ignored as we won't be able to rely on the installer all the time for this. It's the same with /etc/skel.d - we can use it for the initial install, but we will need a system for day-two management of default user files.
At that point, we might as well not rely on the installer for setting that stuff up at all.
I think the one-shots/skel are more aimed at making things easier for ootb experience. I think they pretty much only exist in Bazzite/bluefin. The only one that comes to mind otherwise is the Nvidia container oneshot.
Because I got bored, I am testing out generating an AMI into my personal AWS account for Bluefin and see if it works.
ucore and bluefin both boot it seems, but I can't connect to them via SSH to verify.
bluefin won't be listening for ssh by default
Yeah, I thought about that after putting it up there lol
Ucore on the other hand.....
yeah, it's really weird.
I'm not an AWS expert, but it seems to me like it's not keeping the settings from the config.json file like generating the ISO or qcow2 does.
I also tried creating the AMI with an ssh key and it doesn't allow connection through that either. Weird.
Yeah, it was really odd.
do they have ssh installed and enabled by default?
and was the failure only on aws? Like if you boot the same raw image locally does it work?
yeah. Same configuration for same image works in both qcow2 and ISO.
using the same user I created with the key.
huh
It's really odd.
yeah no idea but I'd love to see if I can reproduce and poke myself
not now though, it's like 1 AM :D
is it possible to keep the raw image it's trying to upload to AWS, so we can test from that?
lol go to sleep 😛
I can link what I'm doing in this chat though.
Link away!
config.json
It uploads and imports the AMI just fine, but I cannot connect to it, it's super weird.
I have created a single upstream issue (at @Ondřej 's request) with our spec for MVP: https://github.com/osbuild/bootc-image-builder/issues/133
ack, also, speaking about the anaconda UI
@Kyle Gospo do we still have the issue where the anaconda UI doesn't fit on the deck screen?
Would it be that awful to just hard-set the deck images to a 1280x800 (or whatever it is) resolution on grub's linux line?
I think we did that before, but iirc the new installer would Just Work on the handhelds from a screen perspective without having to add in args for each device?
LCD deck is fine via karg
OLED deck is mega broken, but that's kernel in the ISO related
assume you meant "not ISO related"
technically it is; if we could build an ISO with a custom kernel it'd be fixed
the problem is specifically that the linux kernel is missing patches upstream for the display orientation of the OLED
oh I see what you mean, nod.
so it'll also solve itself w/ time
I feel like we are making real momentum!
It's like in star trek when the ship is in spacedock and they're lining up the final pieces and now it looks like a real ship
Big hurdles now are having the image point to the correct upstream and more full anaconda support.
But the ISO builds!
Which is so exciting.
2gb limitation gone?!
I never thought we'd see the day where Linux ISOs would surpass the size of a Windows installation media ISO but maybe, just maybe...
Qcow2 image build was where that issue is.
ISO builds fine.
That's sweet! This entire time I thought it also impacted ISO builds
Didn't get a chance until very recently!
Found an ISO gotcha, not for the RHEL guys but we'll need to handle it
We very likely want to sign it with our kmod key, but skip the existing key removal process we did to the kernel
For devices like the deck which ship with no secure boot keys at all, you'd end up in a position where the installer would cease working after enabling it
That way it'll continue to work for them even if they take the step of enabling it when they probably shouldn't
In that case, should we just leave the keys without stripping them? An option could be added to the action to skip it. I don't think having the keys just be there is a concern other than a cosmetic annoyance
I think the removal is good for the kernel as we don't sign stock fedora kernels, and let's say the key for the surface kernel is stolen one day -- we're safe because we resign
We're the sole party in charge of the keys
For the ISO, definitely keep originals since that'd be the actual Fedora keys ideally
why would you strip pre-existing signatures?
oh, maybe this is the reason
hmm
Yeah. Kyle's explanation is spot on
Getting closer. Looks like we will be able to select the root file system: https://github.com/osbuild/bootc-image-builder/pull/106 and, if I'm understanding this PR correctly, it should use the buildroot of the container rather than what is in OSBuild: https://github.com/osbuild/bootc-image-builder/pull/138
also, discussions around disk partitioning in general: https://github.com/osbuild/bootc-image-builder/discussions/147
@tuk.n @achkts It seems an update to bib broke ISO installs for our images. So far silverblue-main and onyx-main don't work. I'm gathering logs now...
@travier This is the thread I was talking about when we had our conversation earlier.
Gonna test our ucore image next.
Here is a screenshot:
logs incoming
build_iso.sh
test_iso.sh
error I get when attempting to install using the ISO manually (using UEFI in Virtual Machine Manager):
error I get attempting to install using bios mode:
no issues generating qcow2 images
just for posterity, I'm going to test the quay.io/centos-bootc/fedora-bootc:eln just to see if there is a different result.
no change to the behavior using ucore.
OK, ELN works. something with our images it does not like 😦
Something in eln
Let me get you a link.
GitHub
test: add container installer test by mvo5 · Pull Request #109 · os...
Add a new integration test that checks that the container installer
is working. The container installer just does an unattended install
of a disk. The test will run qemu with the installer.iso and a...
This PR should have fixed it
Just merged now
I will check when I get home
OK cool, looks like things are working again
\o/
Take a look at this: https://github.com/containers/podman-desktop-extension-bootc/blob/main/README.md
@achkts @tuk.n When you have time, no rush, but bootc-image-builder appears to be broken for making our images again
Warning I got right before:
Error it failed on:
Command I used to build:
Ok we have an ISO distribution method now, so netinstaller support is no longer critical for us.
(TLDR Cloudflare R2 is a wonderful thing)
are there examples of other config.json's so we can use that to match the fedora defaults?
The majority of options are not yet able to be passed through from config.json. It's a select few, like user and network configuration
I’d like to make sure bootable USB (that persists) stays an option. Particularly for folks to give things a spin without having to install to a drive.
I don't think that's a "stays" thing I think that's a "needs to be implemented" thing
Is there a location to see what options are passed through?
👀
Was just digging that out. The struct with options can be found here
https://github.com/osbuild/images/blob/main/pkg%2Fblueprint%2Fcustomizations.go#L9-L29
GitHub
images/pkg/blueprint/customizations.go at main · osbuild/images
Image builder image definition library. Contribute to osbuild/images development by creating an account on GitHub.
Thanks!
yeah so usually if you don't pass it options it asks you interactively, but when I just made one it was all automated and just blows away the first disk.
I made one without a config.json
Unfortunately the kickstart options are all hard-coded for now, but the maintainers say it's something that will change soon (no timeline). Will dig out the relevant code
GitHub
images/pkg/manifest/anaconda_installer_iso_tree.go at main · osbuil...
Image builder image definition library. Contribute to osbuild/images development by creating an account on GitHub.
In theory, we could probably just delete or change that kickstart file in CI in the built ISO
ah ok, so if we pass it an empty one like how we currently do it then that should be ok?
right, that's what I'm thinking
https://github.com/osbuild/images/pull/422
I'm not sure how complex the extracting, editing, and recreating of the ISO would end up being. Unsure how much metadata is stored in an ISO file outside of the raw files, or if we can just untar and tar it back up again
just saw this
Not sure what to think. This goes way over my head
It may be something I saw upstream bootc
Will try to have a look starting Monday. We were out on Thursday and Friday, so I'm only catching up with things slowly.
No worries! I will be gone most of this week.
I successfully made an ISO and a qcow2 image today of bluefin
Oh great! Something must've gotten fixed between now and when I tried last.
Unless maybe it was an issue with me passing a config.json
I didn't use one
That's a good piece of info!
though, on my personal USB stick at home I'll have a fully automated stick so I can just boot it onto a laptop and have it autoblast the thing.
Be nice if we could have an automated test for UBlue and Bluefin in our CI >_>
(It's late and I'm sleepy, I am not making any commitments or promises right now :D )
it would look just like the Fedora one right?
yeah
you didn't go to fosdem?
I am indeed at FOSDEM
well, in the hotel room right now but you know
I have coworkers there, I'm going to be there next year though
If you all are in Brussels, get out to cantillon and delirium
We should totally meet up. I'm sure we'll be here again next year.
Wont be here tomorrow night, but Delirium is really nice yeah. Never been to Cantillon though.
I've been coming every year since 2017 (minus the COVID years)
I've been 3 times
I've heard it's really fun!
@αchilleaς I have a pic for you, one sec
Utter chaos, but I love it. It just turns into an annual reunion of fossy folk
Will have to see if I can go next year
Me with mvo in 2008! Tell him we've come full circle
This picture is so old I'm wearing a freaking Ximian tshirt lol
OH MY GOD
Should've sent this an hour or so ago when we were still at the bar :D
Same party, but with pitti!
wow!
Pitti didn't come this year unfortunately
I need to count the number of ex-ubuntu developers who contributed to fedora and ended up helping bluefin, that will be a fun thing to chase down
heh, funny that
Btw, do you also know Lars?
linus' roommate Lars?
yeah I had to share rooms with that guy because you had to share rooms with people at conferences and sprints because we were so cheap lol
no don't think so. Berlin Lars
Last name Karlitski but I think he had a different last name back in his Canonical days.
"you had to share rooms with people at conferences and sprints because we were so cheap lol"
It's so funny how often this comes up haha
Has anyone had success building the ISO / qemu disk from an Ubuntu host using podman? Any gotchas there?
I would make sure the podman version on ubuntu is up to date
but that's a general problem in general on ubuntu. 😦
What version of podman are folks using?
We are spinning up coreos 39.20240112.3.0 and having issues with podman builds.
Running
podman run --rm -it --privileged --pull=newer --security-opt label=type:unconfined_t -v /var/home/core/output:/output quay.io/centos-bootc/bootc-image-builder:latest --type qcow2 ghcr.io/ublue-os/silverblue-main:latest
Not sure if this is related? : https://github.com/osbuild/bootc-image-builder/issues/168#issuecomment-1922357339
GitHub
building a qcow2 from a derived image doesn't work throwing Selinux...
Building a derived image on osx with podman desktop and trying to create a qcow2 fails with selinux error in the pipeline. Here's the Containerfile used - note it doesn't matter if the non-...
I ran into the same thing recently.
I'm on the latest version of podman provided by Fedora Kinoite.
It may be that we need to install this https://packages.fedoraproject.org/pkgs/osbuild/osbuild-selinux/ ?
osbuild-selinux - Fedora Packages
View osbuild-selinux in the Fedora package repositories. osbuild-selinux: SELinux policies
GitHub
anaconda/ostree: Ensure we set target image reference by cgwalters ...
Basically anaconda/kickstart lacks support for --target-image which we need when installing from a container image embedded in the ISO. Work around it with an injected %post.
Closes: #380
this means the remote will be set right on boot? instead of the image on the disk? I think that's what this means
if that's the case then this one is huge. 😄
I think so!
I can test it when I get back home.
then if we can tell it to not go into automated mode so that users can just click through the installer that'd be way better than our janky thing. The flatpaks would still be janky but at least the installation experience wouldn't be awful.
Yup, but your users need to use bootc. AFAIK, it won't work with rpm-ostree.
what do you mean by using bootc, it's in the image already isn't it? or do you mean they'd be using the bootc command to manage everything from then on?
if you mean bootupd I think this is the spec https://fedoraproject.org/wiki/Changes/FedoraSilverblueBootupd
https://github.com/fedora-silverblue/issue-tracker/issues/530
I think that rpm-ostree and bootc use different config files for remotes
But I might be wrong.
I think we were supposed to switch at some point but I never quite understood when that was supposed to happen
Yeah, that's hopefully the plan for all image-based Fedora spins. 🙂
I also hope that https://fedoraproject.org/wiki/Changes/Fedora_IoT_Bootable_Container will unlock more potential for creating small derived images (and it starts with bootc/bootupd from the beginning).
coreos has this already doesn't it?
Isn't it fairly heavyweight? It has ignition, both podman and moby-engine...
yeah but nothing else, it's still a ton smaller than say, ubuntu server.
but silverblue/kinoite are built different, you can't really rebase from one to the other. Though every spec I read seems to indicate that they will all be the same, just not any time soon, heh.
The idea of the new bootable container is that it should literally contain just kernel+systemd+bootc. So in theory, it can serve as a unified base for everything. Would be quite cool for deduping everything, and moving more stuff to Dockerfiles.
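If that lands, a derived image could really be a tiny Containerfile on top of the unified base. A sketch, assuming a fedora-bootc style base image (the package choice is just illustrative):
FROM quay.io/centos-bootc/fedora-bootc:eln
# everything beyond kernel+systemd+bootc becomes an ordinary layer
RUN dnf install -y podman && dnf clean all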
I would love that!
I'm assuming with using just bootc, you aren't able to layer RPMs like you can with rpm-ostree correct?
it's supposed to but I don't think it does right now
if you try to
bootc upgrade
a machine with layered packages it bails
# bootc update
ERROR Upgrading: Reading manifest data from commit: Missing ostree.manifest-digest metadata on merge commit
mine's failing completely now that I try it
mine still shows this
Yeah, they must be configured in different locations then. You could try validating with a bootc-based CentOS image or the Fedora ELN image.
good call
Anyone up for taking a look at this later today with me?
I have a coreos box we are trying to use as the qcow build host, but haven't been able to make it work yet.
Unfortunately I'm swamped all day. I'd be happy to once I'm back on Friday.
I will drop a way to connect to the box into the channel before I board a plane to NZ today. That way we can be async but still be working together on the same box.
Thanks for any help you (or others) can offer!
Sure, are you using the CentOS bootc Image to start with?
CentOS-bootc and Fedora ELN are the best-supported images.
Here is the repo with our approach (to bring up the hardware + coreos for building the image with podman): https://github.com/ii/corehost
GitHub
GitHub - ii/corehost: A Fedora CoreOS image with some development p...
A Fedora CoreOS image with some development packages built in - GitHub - ii/corehost: A Fedora CoreOS image with some development packages built in
Specifically here: https://github.com/ii/corehost/tree/hh/equinix-metal-ipxe-boot
GitHub
corehost/equinix-metal-ipxe-boot at hh · ii/corehost
A Fedora CoreOS image with some development packages built in - ii/corehost
I’m pretty sure the OS we end up using is specified here: https://github.com/ii/corehost/blob/hh/equinix-metal-ipxe-boot/ipxe.txt#L10
GitHub
corehost/equinix-metal-ipxe-boot/ipxe.txt at hh · ii/corehost
A Fedora CoreOS image with some development packages built in - ii/corehost
kernel ${BASEURL}/fedora-coreos-${VERSION}-live-kernel-x86_64 initrd=main coreos.live.rootfs_url=${BASEURL}/fedora-coreos-${VERSION}-live-rootfs.x86_64.img coreos.inst.install_dev=${INSTALLDEV} coreos.inst.ignition_url=${CONFIGURL} coreos.inst.console=ttyS1,115200n8 console=ttyS1,115200n8
initrd --name main ${BASEURL}/fedora-coreos-${VERSION}-live-initramfs.x86_64.img
set BASEURL https://builds.coreos.fedoraproject.org/prod/streams/${STREAM}/builds/${VERSION}/x86_64
Are we using the right base OS to build from? Maybe we should be using CentOS-bootc or Fedora-ELN instead… let me look into this. I think we assumed that a current coreos box would be enough.
Images built with BIB are best supported when the source container uses bootc. It happens to build our images currently, but with some caveats so far.
I'm starting to wonder if we should consider implementing what Fedora does upstream with Lorax to build our ISO.
This would allow us to mirror what upstream is doing. I don't expect Fedora 40 Atomic to be switching to bootc anytime soon, which makes it tough to use bootc-image-builder
This is more just a shower thought on my part. I like the simplicity of bootc-image-builder, but I worry rpm-ostree won't be well supported by the tool. I'm not saying we should make decisions either way, I'm just wondering if we should explore all of the options in front of us.
we tried that already
Tried in what way? Just too much infra to manage and set up?
that's the iso we have now
I found this post by Timothy: https://github.com/fedora-silverblue/issue-tracker/issues/415#issuecomment-1421309495
Right, but aren't we doing a live rebase? I'm saying the image is directly in the ISO with no need for a rebase.
Like it sources our image first. For net install from GHCR, for an offline ISO, the image is directly available to use.
if you think you can make it work, sure, go for it!
More of a science project at this point. I did meet up with someone at my conference that made it sound like he was getting around this problem using kickstart. I was gonna connect with him after to figure out exactly what he was thinking.
I'll just ask timothee
Yeah. I'm wondering if that's the best route. I can also get in contact with him about it.
I pinged him
My thought is we likely just need Lorax to build the image since all the hard work has been done for us.
Just saw.
@αchilleaς @Ondřej I had a question about: https://github.com/osbuild/images/pull/413
GitHub
anaconda/ostree: Ensure we set target image reference by cgwalters ...
Basically anaconda/kickstart lacks support for --target-image which we need when installing from a container image embedded in the ISO. Work around it with an injected %post.
Closes: #380
is this available in bootc-image-builder?
because when I tried with the fedora-eln image, it was not pointing to the correct registry location after install from the ISO.
Not yet AFAIK
Matrix conversation
https://github.com/osbuild/images/blob/41624568c089e4bd23106f21a98f595f930ad360/pkg/image/anaconda_container_installer.go#L53
I am of 2 minds on this.
I think it may be easier to adapt bootc-image-builder in the short term, but I'm wondering if we can band together and create an OSBuild pipeline that we can contribute upstream.
@Robert (p5) @M2 any thoughts on this? I know both of you have dug into the osbuild code a bit and understand it better than I do.
The majority of the changes will be in osbuild/images. BIB forwards configuration options to that package, gets back the manifest, then calls osbuild with that manifest.
We need very few changes to osbuild/images if my understanding is correct, but deciding the interface to expose in the package isn't really something we can do ourselves. We don't know if all kickstart options should be overwritable, if they want pre-defined configurations etc
If we wanted to hack it together, we just need to mv osbuild-base.ks osbuild.ks and we're done (not tested).
Also receiving this error when un-ISOing and re-ISOing the ISO. My xorriso commands must be incorrect
Just tried it, and nope. :/
Not as easy as I thought
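For whoever picks this up next: the extract half of the round-trip is the easy part; the repack has to preserve the El Torito boot records, which a plain untar/retar won't. A sketch, with flags hedged from the xorriso docs:
# pull the ISO contents out to a working directory
xorriso -osirrox on -indev deploy.iso -extract / isodir
# ...edit isodir/osbuild.ks here...
# write a new ISO, replaying the original's boot metadata
xorriso -indev deploy.iso -outdev new.iso -boot_image any replay -map isodir/osbuild.ks /osbuild.ks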
copying this over from matrix: dogphilosopher, castrojo: unfortunately it's more complicated for me to get used to the code and I don't have more time to spend on that
however, here is the idea of what needs to be done:
https://github.com/jkonecny12/images/commits/main-add-atomic-desktops/
pretty unfinished and untidy but maybe that could help you
then you will just adjust:
https://github.com/osbuild/bootc-image-builder/blob/main/bib/cmd/bootc-image-builder/image.go#L162
this structure to set ISORootKickstart to True optionally based on parameter
I can try to move this forward later but not sure when I'm able to get to that, I should have been doing something else today but it was so interesting to play with a new project 😄 😄
@Robert (p5) To explain the intended outcome: it'd require moving the kickstart from inst.ks= to an interactive default and removing most of the lines from the kickstart file
This is the WIP commit: https://github.com/jkonecny12/images/commit/5d395870c46c13ada1ed463e33433d035066db84
so I have some ideas on how to do an offline installer image.
I talked with a coworker at Red Hat and we opened a new repo to work on a POC.
What we are thinking is this: https://github.com/JasonN3/container-installer/issues/1
at a high level, all the ostree installer is doing is ostree container image deploy
to deploy the image to a hard drive.
which I know we are currently doing with our current installer.
but what we could do is modify the ISO to include podman as part of the ISO image.
we could create a pre-populated registry with our image that are cached in that registry.
then we have ostree container image deploy reference that location by updating the /etc/hosts file to point to that location.
so we would have a cached image that is included in the installer that it references and we would still get the benefits of the user creation in anaconda.
if we do this method, we could also use the newer GUI UI for anaconda too, because we could edit the live ISO to include that and podman.
I actually think this might work.
I don't want to get folks hopes up, but this idea seems sound.
I found https://github.com/livecd-tools/livecd-tools and https://pagure.io/fedora-kickstarts
This allows you to create a custom live ISO.
and all you need to do is edit the kickstart file to add podman.
once we do that, we implement a systemd service that starts to pull down our image into a local registry as part of the build process of the ISO image.
then when it starts, it will spin up that registry and we can use ostree deploy to deploy the image from that local registry included in the ISO.
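Roughly, the boot-time side could look like this (a sketch; the registry:2 image, mount paths, and image name are all assumptions):
# serve the pre-seeded registry data shipped on the ISO
podman run -d --name local-registry -p 5000:5000 \
  -v /run/install/repo/registry:/var/lib/registry registry:2
# then the kickstart deploys from it
ostreecontainer --url=localhost:5000/ublue-os/bazzite:latest --no-signature-verification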
this avoids having anaconda pull down the image remotely, which is part of our current installer woes.
it's baked into the image itself.
and it should ostree deploy it from the correct remote and be up to date immediately.
"but I'm wondering if we can band together and create an OSBuild pipeline that we can contribute to upstream with"
If you feel bootc-image-builder will be too inflexible, I'd be in favour of this
yeah, OSBuild is just incredibly complicated and I'm not sure where to even start on that.
maybe others might though.
"others" meaning me? 😆
so like..
I mean let's work on this. I think we can talk about making a higher level tool like bib, or we can do it in osbuild-composer
that would be really cool if we could do that.
@Skynet
I do think at some point we're going to want to have container ISO builds in the service, and disk images, so the stuff that BIB does, which means it'll have to go into osbuild-composer. As for customizations for the ISO, I'm working on some of those right now. Namely the "unattended" option. It's already in osbuild/images, but I'm cleaning it up a bit to make it a blueprint option
@Noel it's important to remember that customizations and knobs on bib aren't a priority for now but they are definitely part of the plan
For sure. I think the tough part for us currently with the whole thing is bootc-image-builder has a strong focus on supporting bootc based containers which fedora is not using.
and from the sounds of things upstream, they don't intend to support bootc in the next release.
The Fedora iot bootable container is a bootc container compatible with bootc-image-builder
it pains us because we have real users that value the changes we make in both bazzite and bluefin, and they can't be consumed in a simple way in regards to the installation.
IOT is definitely similar to Silverblue, but they aren't the same.
yeah. Just curious if Silverblue and friends might pick up bootc as well
we can carry a new bootc, that's not a problem. The bootc in fedora is from october of last year. 😦
huh, really?
yeah, that's "ancient"
I guess Colin's been busy
we were messing with it last night so we just happened to be in the neighborhood
when I installed both bootc and bootupd and tried to build the image with bib, it failed.
tested that last night.
in a containerfile?
this didn't work with BIB
I was testing to see if the
bootc switch
Try adding this at the end:
https://github.com/CentOS/centos-bootc/blob/76d079d6c3c3b83786924e434edc500fef130e22/tier-0/bootupd.yaml#L31
GitHub
centos-bootc/tier-0/bootupd.yaml at 76d079d6c3c3b83786924e434edc500...
Create and maintain base bootable container images from Fedora ELN and CentOS Stream packages - CentOS/centos-bootc
that's right! I forgot about the magic script.
bib uses bootupd to setup the bootloader so the metadata needs to be there
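If I'm reading the linked yaml right, the Containerfile equivalent is a one-liner; hedged, so double-check the exact invocation against centos-bootc:
# generate the update metadata bootupd expects to find in the image
RUN bootupctl backend generate-update-metadata /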
Hey everyone, i'm currently working on creating a bootable image based on kinoite. I've created a container image which installs bootupd in an attempt to create a qcow2... running into this. Here's the podman command: any ideas on how to move past this? thanks!
Try adding the following command to your Dockerfile:
thank you @Ondřej , made the change but still continuing to get the same error:
I have looked through the org.osbuild.bootupd code and can see the args passed to the command, but I'm unable to locate where the rpm -qf is coming from or what exactly it's attempting to accomplish, e.g. what file/package is it trying to verify
Seems this is where the rpm -qf is from: https://github.com/osbuild/osbuild/blob/291f5cc29ed29ce4c16209c8befb5b0755371f3f/stages/org.osbuild.rpm#L237-L255
GitHub
osbuild/stages/org.osbuild.rpm at 291f5cc29ed29ce4c16209c8befb5b075...
Build-Pipelines for Operating System Artifacts. Contribute to osbuild/osbuild development by creating an account on GitHub.
@Ondřej @αchilleaς do you know if there is a way to use anaconda to deploy a local oci-archive from the local filesystem? (via kickstart)
ostreecontainer --stateroot="bazzite" --remote="bazzite" --no-signature-verification --transport="oci-archive" --url="bazzite.tar"
does not work.
I've tried a few different iterations.
Code is up for Net Installer. Workflow is hard-coded to build bazzite:latest. Hoping to improve that by either doing multiple workflows or something. I'm too tired to work on it anymore tonight: https://github.com/noelmiller/ublue_installer
So, updates.
@Skynet (and I guess me 😂) have been working really hard on an offline installer. We have an idea of unencapsulating the OCI image into an OSTree repo, but we are trying to track down what might be causing us to have non-ostree layers.
Here is the error:
You need to create an OSTree repo with ostree init --repo=ostree/repo first to test this.
Bazzite has 3 non OSTree layers, main only has 1.
Upstream Silverblue uses an OSTree repo as its install source for the offline installer.
Because it's supported in anaconda.
The only other option I can think of is to wait for upstream anaconda to support OCI archives and make those available in the image to deploy from.
Fixed. There's a label that needs to be added with the last diff-id. I got it added using my workflow for one tag. I just need to loop over the other tags
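If anyone wants to eyeball that value, something like this should print it (the image ref is a placeholder):
# print the last layer diff-id from the image config
skopeo inspect --config docker://ghcr.io/ublue-os/bazzite:latest | jq -r '.rootfs.diff_ids[-1]'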
You rule!
So you got it to unencapsulate?
I got it to stop saying there were non-ostree layers, which, when we were testing things earlier, was the only error. I'm running a workflow now that should unencapsulate the image I built into the runtime image. I copied the Fedora Lorax templates and then removed whatever wasn't needed
so
I have amazing news.
we are very close.
we are testing out a couple more methods.
but realistically, I found what BIB is doing with their kickstart to deploy from an oci-dir.
so I just need to figure out how to set up proper partitioning.
and updating the rpm-ostree remote.
which can be solved using the bootc command that bib is using.
but that would require us to ship bootc and have it work properly 😄
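For reference, my understanding of the BIB workaround is a %post along these lines (hedged; the image ref is a placeholder and the exact flags should be checked against BIB's generated kickstart):
%post
# rewrite the deployment's origin to point at the real registry image
bootc switch --mutate-in-place --transport registry ghcr.io/ublue-os/bluefin:latest
%end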
unfortunately the problem I haven't solved is installing flatpaks during install.
I wouldn't worry about those for now
we can always replace what we have and let the existing service units do their thing and it'd still be a huge win
and then sort the flatpaks later
sounds like we are about done 😉
yup, we've got a working protoype.
just need to fix the remotes thing.
with Lorax, we can build any variant we want FYI.
as long as the ISO image is small enough to fit in Github releases, we can host it there for free.
also I might need some design help as we could theoretically theme anaconda to use the bluefin logo and bazzite logos for those images.
just have to figure out what files we would need to overwrite which would be pretty trivial.
this is the closest we have ever been.
just a few remaining things that we would've needed to sort out with BIB.
I'm testing building bluefin now.
I've gotten silverblue-main to install.
with the caveats that it's pointed to the wrong ostree location.
I'm working on fixing the remote name. As for fixing the logos, there are some rpm packages that lorax installs for those logos; the best way to do it is to create your own rpms.
I'm also working on some Makefiles to make local development of the ISO easier
yeah. generating stuff through github is VERY slow.
takes about 15 - 30 minutes each time we try to build.
depends how github workflows are feeling.
@Kyle Gospo looks like you have some work to do 😉
also I am very much OK with using the workaround that BIB is using right now for fixing the remotes issue:
just requires us to install bootc in our images.
which we wanted to do anyway
ISO for bluefin is 3.5GB in size.
I'll test bazzite in a bit.
I think the oci-dir is compressing the size of the container image, so I think bazzite won't be much bigger.
@Noel if you want to recopy my branch, I have the Makefile working, which makes local development easier. Just install make and then, when you're in the directory, delete deploy.iso before running it again. It will skip most of the build process that doesn't need to be redone. You can override any of the variables by setting them after the make command: make image_name=bazzite. I also ended up switching from oci-archive to oci as well. When it's set to oci-archive, it extracts the archive inside of the live filesystem, so I figured I might as well have it already extracted
You should probably add a variant variable like I have in my branch.
So we can build using Silverblue or Kinoite.
Some of our images are based on Kinoite and this is important to have the installer be based on that because Silverblue by default does not allow you to add a user.
Because gnome takes care of that.
Added
so
got os-installer building
have a copr available here shortly
we may not need it.
we are in the final stretch.
double-checked: we have bootc in the images, it's just an older version from the fedora repo; we might need to copr or kyle it.
we may not need it.
I'm going to mexico for a week on friday so I probably won't be around for the iso party lol
I'm thinking we may have something in the next hour or so.
standby 😄
we have signed-images working now.
!!!!
yup.
we tested an update after the install and it works.
WHAT
and the remote is set fine?
yes.
we're just sorting out some stuff for the PR.
so what you're saying is that we're done.
yup, we're just sorting through some stuff for variables to use our existing matrixes.
the only other question would be can our ISOs fit in a Github release.
@Kyle Gospo E Z
they may possibly.
we have a Team plan so it's 4GB
also, main and the other images are pretty small isos, so we can publish the others as releases, just not promote them.
so like, we can have an ISO for silverblue-main, kinoite-main, etc.
the file limit is per file, so we can have as many as we want
we meeting in voice tonight? I have questions!
I would like to, but I'm taking the kids to basketball.
What time are you thinking?
I may be able to jump in briefly.
just whenevs
could be post-kid bedtime
Are you available now?
I have 15 minutes.
bootupd's bootupctl ... command runs rpm -qf to find the version of certain packages that should be in the image (grub-tools and shim, I believe).
the URL probably needs a file:// prefix..? I should check how we do it
I figured it out.
You guys use an oci-dir
that's true, we do!
:D
For folks interested, try booting from the ISO created by this workflow and see if it works to install to a VM.
https://github.com/JasonN3/container-installer/actions/runs/7909030556
OK.
you need to specify UEFI for it to boot.
This is a feature
Don't fix this
LOL
Dead serious, if you fix this I want the option to unfix it
Legacy mode is terrible for games
OK. we won't fix it 😄 less work for @Skynet and myself
🎊
@Kyle Gospo @j0rge
I just checked and my install was on BIOS not UEFI
huh
something is fucky then...
what version of virt-manager and qemu are you using?
I'm retrying with less memory. You use 4G and I was using 8G
OH
weird...
why would bios mode require more memory?
I'll give it a try quick
I wonder if we could do something that would restrict to UEFI only.
or at least specifically for bazzite
nah that's fine
no need to make extra work
increasing the memory in bios mode fixed it.
we will want to inform folks of minimum amount of memory.
and for bazzite specifically UEFI mode.
okie dokie.
so...
we fix up the workflow
and... we're done? except maybe branding?
and using the web version of anaconda?
both of the above not strictly neccessary.
I think so! Already much better than what we have
next things to look into would be new anaconda web UI
and live install for virt keyboard support
so...
I think I have a plan for live installer.
I will need to transplant a lot of what we have done here, but just use livecd-creator instead for the workflow.
it will result in a much larger image, but we could host that somewhere for steam-deck and handhelds.
it will still be an offline image with all the stuff we've figured out.
4G of memory with BIOS worked for me
I just need to retool to use oci-dir
WEIRD....
uhhhhhhhhhhh...
I wonder if it's not predictable.
we are running the image and it's loading stuff into memory right?
so depending on the amount of memory anaconda is already using, maybe it's running out when trying to copy over the files from the oci-dir?
no one should be running 4GB of RAM on a laptop anyway
especially for gaming and development.
I wish I had some old physical hardware around I could test on.
anyway, I'm gonna quit for tonight while we are ahead. @Skynet I owe you some pizza or beer or something.
I think you're both owed pizza beer
there's some quirky things specifically with qemu-kvm and memory settings... giving a VM 4GB doesn't mean the running machine sees 4GB... some of that goes to the host system... I wouldn't worry about it
I’m heading back to US tomorrow
Hope to start taking a closer look at all this next week
🇳🇿 🛫 🌊 🛬 ⛰️ -> Boulder Colorado
Bazzite works
@j0rge testing it now:
not sure how theming works for webui, but it may be easier to theme now if we use this instead of the original GTK based one.
kyle has an item on that
it's like standard web stuff
was it as easy as the docs implied? new package + a boot arg?
yes.
though my local build environment is being goofy.
well then, it looks like you are almost done!
you can run docker and podman at the same time right?
we do that in bluefin?
I'm trying to replicate what Github is doing in my local environment.
yeah
There's a random extra build that gets triggered, but all the ISOs in the matrix I set got built
https://github.com/JasonN3/ublue-os-main/actions/runs/7918715775
GitHub
build isos · JasonN3/ublue-os-main@af10251
OCI base images of Fedora with batteries included. Contribute to JasonN3/ublue-os-main development by creating an account on GitHub.
amazing!!!!!
This allows the build process to be one repo (container-installer) while the actual runs are triggered from each of the individual repos (main in this case)
SO AWESOMEEEEEEEEEEEEEEEEEEEEEEE
Thinking about it a bit, this repo is pretty generalized, so others can definitely consume it to build their own ISO images.
Build general and use forever
so what do we need to do, update the isogenerator action to do this?
where's the iso building part in this repo?
I wouldn't mind seeing if we can get some theming done using the webui anaconda if it works properly.
yeah kyle wants to do that
agree 100% bazzite gamer install go nuts.
yeah, something is fucky with my local build process.
oh iso's branch
https://github.com/JasonN3/ublue-os-main/blob/a4f980982fa04f29c11c91d8a7172fe99c8161ec/.github/workflows/build.yml#L259
This is the line that triggers the build
GitHub
ublue-os-main/.github/workflows/build.yml at a4f980982fa04f29c11c91...
OCI base images of Fedora with batteries included. Contribute to JasonN3/ublue-os-main development by creating an account on GitHub.
I tested main just now using the web installer and it installed perfectly.
beautiful!
I just need to add it as a default grub boot argument.
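Assuming the documented switch, that's just appending inst.webui to the ISO's grub entry; a sketch (other kernel args omitted for brevity):
# in the ISO's grub.cfg
linux /images/pxeboot/vmlinuz quiet inst.webui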
anyone have a sec to hop onto voice?
I can.
omw
GitHub
GitHub - ublue-os/isogenerator: An action to generate custom ISOs o...
An action to generate custom ISOs of OCI images. Contribute to ublue-os/isogenerator development by creating an account on GitHub.
GitHub
GitHub - JasonN3/container-installer: Creates an ISO for installing...
Creates an ISO for installing a container image as an OS - JasonN3/container-installer
make image_name=bluefin version=gts ?
Yeah, that should work
libdnf._error.Error: Failed to download metadata for repo 'fedora': Cannot prepare internal mirrorlist: Status code: 404 for https://mirrors.fedoraproject.org/metalink?repo=fedora-gts&arch=x86_64 (IP: 140.211.169.196)
my ISO didn't work
GitHub
anaconda/docs/drop-in-scripts.rst at fedora-40 · rhinstaller/anaconda
System installer for Fedora, RHEL and other distributions - rhinstaller/anaconda
Building bluefin: https://github.com/JasonN3/ublue-os-bluefin/actions/runs/7920916103/job/21625756152
GitHub
build the isos · JasonN3/ublue-os-bluefin@562912d
An interpretation of the Ubuntu spirit built on Fedora technology - build the isos · JasonN3/ublue-os-bluefin@562912d
GitHub
feat: add anaconda-webui package by noelmiller · Pull Request #9 · ...
This is for testing out and seeing if we can use the new anaconda webui by default.
https://github.com/ublue-os/isogenerator/issues/2 @Noel @Skynet IT WORKED!!!!!
only issue is the remote was assigned the number and not the tag
mm marinara pepperoni beer
7-stethoscope-gremlin-galveston-preclude
Warp ISO build ^
I'm working on the gts image tag problem in the isogenerator... just giving it a bit more time, i'm seeing the bigger picture
What is gts supposed to stand for? The actual letters
"grand touring support" - basically behind fedora but ahead of LTS
Ok. Thank you
lts -> gts -> latest is the final endstate, lts being the centos-derived ones which don't exist yet. That would map to el9 -> 38 -> 39. And then on release days we adjust the other tags to point to the right things
gnome-software and co are still not OCI enabled so that's why we make a rolling tag, otherwise people would be stuck on that image unless they know how to rebase via the CLI.
Yeah I get the same thing, but on an ISO I built for https://github.com/ii/corehost
GitHub
GitHub - ii/corehost: A Fedora CoreOS image with some development p...
A Fedora CoreOS image with some development packages built in - ii/corehost
@j0rge hoping to simplify the testing process for us: https://github.com/ublue-os/isogenerator/pull/4
GitHub
feat: Adding Dockerfile to simplify development by noelmiller · Pul...
I want to create a dockerfile to simplify development and also allow folks to run it on their local machine using docker.
I saw!
I'm testing it right now to see if it works on my machine 😄
how's it working?
You have to turn SELinux off. I need to squash a bug to get it to output to the correct directory. Otherwise, as far as I can tell it works.
ack, I'll keep my ubuntu server handy then
I figured out my problem.
I'm bad at using git commands 😄
and it didn't show the error that was happening in the docker build.
either way, once this is merged, I don't need to do a git checkout anyway.
What are folks thoughts on me publishing the docker file to a container registry?
then we could add a ujust command to build the ISO.
would be cool if I could find a way to have the ujust command detect what OS you are running and then build an image for that OS by default.
you could override it with variables of course 😄
I suppose I should try to figure out what is causing the SELinux error before we publish a ujust command for building the ISO.
@j0rge @Kyle Gospo so once I fix the tags issue, all you will need to do after building the docker container and disabling selinux is this command:
once I get an action created to push to the ghcr registry, you will just need to pull the isogenerator container and then do the run above 😄
Can someone with admin add in the protected branch info for isogenerator?
@j0rge @Kyle Gospo ? Otherwise I'll see if I can do it in a bit.
Update: I cannot. I will need an admin to do it.
On a plane
Just merge for now, no worries
@Noel i see a few gotchas on your PR for the image_tag fix... want feedback in the PR or shall i add commits?
Go ahead and commit
Can we do apache license on the repo so we match the rest of the org?
Sure
Unified workflow for building the container images: https://github.com/JasonN3/build-action/actions/runs/7935694913/job/21669394557
I based it off bazzite and then tested it using main and bluefin. I need to check through the rest of the repos to see if there's anything else crazy done in them, but this is the first step towards getting all of the images built in a standard way and a single place to edit that code
GitHub
Unified workflow · JasonN3/build-action@17a7a04
ISO Builder. Contribute to JasonN3/build-action development by creating an account on GitHub.
we're done now
lolz
@j0rge your issue is closed. I will validate when the next ISO builds in the workflow. It's doing it now.
https://github.com/ublue-os/isogenerator/issues/2
@bsherman you rock, thanks for all your help!
Can we please stop merging into main without code reviews? There are now things in main that are kinda wasteful in runtime
I'm not able to turn on branch protection or I would
but yes, we can review
Can we chat in a bit?
Sure. Just need a minute
no worries, in a meeting, but I'll be free in like 10 minutes
Try now
I messed up the perms on creation with approvers
I'm free. Feel free to reach out through discord or wherever.
@Noel I'm in Office Hours
docker run --rm --privileged -v ./output:/isogenerator/output -e IMAGE_NAME="bluefin" -e VARIANT="Silverblue" -e IMAGE_TAG="gts" ghcr.io/ublue-os/isogenerator
Hey... I'd jump in Office Hours to catch up, but I've got to step away from the desk for a while.
Appreciate the comments... I'd like to see quite a bit of improvement regarding use of code reviews and cleanup of code in all our repos. However... the de facto pace right now is a bit... aggressive.
Happy to have you contribute to improvement in that area.
yeah. Maybe when @j0rge gets back we can schedule a meeting for us to attend to discuss code reviews and code cleanup?
I think it would be really good to get everyone on the same page.
everyone is on the same page, use branch protection. 😄
I've turned it on in the right repo
maybe for that issue, but we should have a tactical discussion on code clean up stuff.
Updated the tracker issue: https://github.com/ublue-os/main/issues/468
GitHub
Tracker: New Universal Blue Installer · Issue #468 · ublue-os/main
Spec Document https://hackmd.io/sNczIQz-SKau6oKUS6187Q?view Purpose The current install method is either using the current ISO which is unreliable due to connectivity requirements or rebasing from ...
Across the entire org.
@Skynet branch protection should be in place now
Sorry that took so long
Trying to build with
and having troubles, likely because it's a sub-path in the registry image name.
Anyone else run into anything?
Could you post a pastebin to the exact error? I'm not sure CoreOS is a valid variant name for Lorax to use.
these lines
I've updated it to only publish to ghcr.io/ii/corehost:stable and it's resolved the above errors.
Thanks for checking that. Could you open an issue on https://github.com/ublue-os/isogenerator
Opened an issue for you @Caleb
https://github.com/ublue-os/isogenerator/issues/12
Please review this response and let us know if you have questions: https://github.com/ublue-os/isogenerator/issues/12
Thank you. I was afk but ya beat me to it!
needs some space! (kindly requesting review)
https://github.com/ublue-os/isogenerator/pull/13
GitHub
add step in iso workflow to remove unused software by BobyMCbobs · ...
free up space as some builds require having as much space as possible
@j0rge @bsherman @Kyle Gospo @EyeCantCU @Skynet Please review https://github.com/ublue-os/isogenerator/pull/7 when you have a moment.
Still mostly afk so whatever the consensus decides. Can't wait to play with it!
Left a few comments. Otherwise it looks great 🙂
All of your comments have been resolved. If you can please approve so we can merge, we would really appreciate it!
Approved 🙂
BEHOLD, a new age!
@Kyle Gospo FORWARD.
"FESCo and QA will help arrange mass-testing early in the F41 cycle, shortly after F40 GA." @Noel for webui. So probably in the summer sometime we might be able to pull it in
https://bugzilla.redhat.com/show_bug.cgi?id=2263964
"To Stephen's point, it's worth emphasizing that with the current design, this "instant-apply" UI is the only UI offered for doing a classic "resize a Windows install and install alongside it" operation, in the new webui workflow. The "guided partitioning" workflow that exists alongside the "custom partitioning" workflows in gtkui no longer exists as a separate thing; if you need to free up space, this is the only in-line UI we offer to do it."
this doesn't even affect us
if the installer is a webui, maybe we can just remove those buttons from the UX?
cc @Kyle Gospo
Yeah that's what I was saying, I'm all for shipping that early
It's not just this that's holding it up.
it's also setting the proper rpm-ostree remote.
haven't gotten that to work with the web-ui yet.
Right, that's the missing post step?
yes. It doesn't seem to run the %post step in the interactive-defaults.ks file.
web-ui probably uses the configuration file with dropin directories.
oh ok.
@Kyle Gospo can we make it so a sad trombone sound plays instead when you click the button?
asking for a friend
sky's the limit
@Kyle Gospo On the isogenerator repo, can you remove the merge queue? It removes the ability to squash the merges and makes the commit history really ugly. Also, can you please require that the review approval be on the latest commit, so someone can't get approval and then make unwanted changes that still get merged
yep on it
@Skynet should be good now
Thank you. Do we want 2 reviewers required or still just 1? I see it's 2 now
I swapped to 2 to match main and others
if that's too much I can reduce it back down
your choice
Ok. Matching is fine
Glad it works! I'm going to close your issue then
so after doing some testing, we found some issues on install: https://github.com/ublue-os/isogenerator/pull/17
This PR grew a little bigger after finding bugs.
@Kyle Gospo @j0rge could one of y'all approve: https://github.com/ublue-os/isogenerator/pull/17
GitHub
feat(ci): Added ability to run the workflow manually and fix: gener...
This will allow us to run the workflow manually without having to push anything to main and fix variable expansion
@j0rge
docker run --rm --privileged --volume .:/isogenerator/output -e VERSION=38 -e IMAGE_NAME=bluefin -e IMAGE_TAG=gts -e VARIANT=Silverblue ghcr.io/ublue-os/isogenerator:38
go ahead and try it. should be working now!
Not Jorge, but got curious. Are there any pre-requisites to running this? Got an error after all the RPMs were installed
What was the command you used? and when did you try to run it?
We just merged VERY recently with fixes.
Ran it about 10 minutes ago, and the Docker command you shared above
First log entry was 22:45
Try sudo setenforce 0 to set SELinux to permissive mode.
the actual builder in Github uses Ubuntu and doesn't have that problem.
What's the log entry "selinux is Disabled"? Is that SELinux inside the Docker container?
on your host.
it needs to set up a loop device.
that's in the container.
Ah, that makes sense
yeah. I'm gonna try to figure out why the losetup issue is happening with SELinux enabled.
because the funny part is, my SELinux is enforcing right now and I'm able to build ISOs...
so something is funky with docker.
Podman doesn't work very well at all with loop devices which is why we recommend using docker.
but yeah, give it a try and let me know if it works!
It's gone past the point it failed at
Fans started spinning top speed
Something's happening...
Pulling Bluefin
No dinosaurs were harmed making this iso
ISO generated 🎉
Does this require EFI boot?
It shouldn't!
Well, I used UEFI for now, and Anaconda has loaded. Going through installation now
Is this now an "offline" install? I know stuff like Flatpaks probably won't be present, but the base OS is installed without network?
If so, this is a huge win for the handhelds because I often see people having network issues with those during install
Yes, this solves so much jank. The live rebasing can go away
And I'm in 🎉
We need to replace the iso in bluefin release page as soon as we can and get testers
The flatpak stuff we still need to figure out
@Skynet will be working on the release function for Bluefin tomorrow.
Dope!
great work all!
🚀
@j0rge @Kyle Gospo I remember hearing you guys saying how the old isogenerator is linked to the release-please action. Could you point me to how and where that link is established?
GitHub
bazzite/.github/workflows/release-please.yml at main · ublue-os/baz...
Bazzite is an OCI image that serves as an alternative operating system for the Steam Deck, and a ready-to-game SteamOS-like for desktop computers, living room home theater PCs, and numerous other h...
has the old diff
OK, you removed it! lol
I was like...
I don't understand lol
yeah, at some point we gave up on that ISO and then we made the other offline one.
there's been like 3 attempts at this. 😄
hopefully this will be the last attempt 😄
@j0rge @Kyle Gospo I need someone to approve: https://github.com/ublue-os/isogenerator/pull/21 fixes: https://github.com/JasonN3/ublue-os-bluefin/actions/runs/7992427172/job/21827154414?pr=1
@hippiehacker come get your ISO friend!
README ^^ of that repo has docker instructions, just run it on an ubuntu machine you'll have a full offline ISO that installs the signed image directly. It's not the live usb stick you want, but it's a huge improvement and a step in the right direction.
And also that means we could use your etcher pro to get usb sticks to @Kyle Gospo and @Noel just in time for SCALE
that would be huge
💯 what’s the timeframe?
We should start another thread about Cloud Native Kids Day : https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/program/kids-day/#workshop-information
LF Events
Kids Day | LF Events
Join us for an exciting and educational experience at Kids Day during KubeCon + CloudNativeCon Europe 2024. Designed to engage young minds in the world of technology and innovation, Kids Day offers a…
https://www.socallinuxexpo.org/scale/21x
march 14-17
@Kyle Gospo / @Noel let me know the configuration / image you want for SCALE and I’ll donate the first round of sticks and send them to you once we’ve tested.
I'm paying for the stickers but I only have bluefin ones
Just a thought on the ISO Generator action
You're already pushing the isogenerator Docker image, so can't we just use that in GHA and remove the make commands in action.yml?
Use a container-based reusable action and pass in the required arguments, as you do when building locally https://docs.github.com/en/actions/creating-actions/creating-a-docker-container-action
I'll let @Skynet answer this one. We went back and forth and the reasoning I got was troubleshooting the action would be easier with the individual make steps rather than using the container.
I prefer the make commands because it does make it easier to debug. However, the more I work on it, the more a container starts to make some sense, so I'll give it a try on a branch and see how it works out. It does look like GH will build a new container image each time, so that may slow down the already really slow build process
@Kyle Gospo I'm around all day if you need a hand with bazzite ISOs, standing by.
we could always optimize later
@j0rge can you confirm the exact images you want ISOs for? Reason I ask is Kyle told me that he was deprecating a few images that were in his massive list, so I just wanted to confirm what images we should be building for Bluefin and Bluefin-DX. This is useful information for @Skynet to know.
bluefin, bluefin-dx 38(gts) and 39(latest), total of 4.
Ok.
How easy is it to get ISO verification working?
The default grub entry wants to verify the image before installing, but fails.
Note: This may already be fixed in ublue images. I pointed the script at my custom image and that error showed.
Edit: Created issue - https://github.com/ublue-os/isogenerator/issues/23
File an issue at https://github.com/ublue-os/isogenerator Unsure on that.
@Skynet we may want to consider removing that grub entry until we can get it working.
It's easier to just add some junk files so it passes than to remove that entry
I'll have to look at what's in that service to see how to get it to pass
looks like ventoy worked, so that's a nice problem fixed!
You mentioned the speed is quite slow. It may help to swap out Podman for Skopeo when splatting the image onto disk as I'd imagine skopeo installs far fewer dependencies
It probably would. I'll look into it when I have time. https://github.com/ublue-os/isogenerator/issues/24
Say what now?
Oh your draft PR
Yes it would. We use the podman save command to create an OCI DIR that is copied to the ISO. That's how we are doing offline deployments.
I'm unsure if skopeo has the capability to create an OCI dir the way podman does.
Looking at the docs, it doesn't appear Skopeo has the capability
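For reference, creating that OCI dir with podman save looks something like this (the image name here is just an example):
podman save --format oci-dir -o output/image-oci ghcr.io/ublue-os/bluefin:latest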
working!!!
I'll run the action again to see what happens
testing the outputted dx ISO first
Sweet! Do you plan on setting retention periods on the objects or overwriting the previous ISOs to keep costs down?
Or are the costs low enough to keep everything?
we could keep everything but we have different buckets for bazzite and bluefin
I'm going to shoot for latest only and yolo from there
but I want to test what happens when it's midupload
There's a CloudFlare setting which removes objects that fail in the middle of an upload. It defaults to 7 days
Oh, see what happens to the existing ISO mid upload of a new one.
right, that's what I meant to explain but failed
ok trying it now
then I think what we do for bluefin is put this ISO build and push at the beginning of the release-please action so it builds and pushes to R2, and THEN the thing makes the changelog and does the release after the artifacts are built. 😄
Sounds like an easy change.
@Kyle Gospo @j0rge we should decide on this once we have everything the way we want.
newly pushed dx ISO installed fine!
testing what happens if we run the action again
as I hoped, the new upload is a new multipart upload and then replaces the existing one once it's complete at the end
@p5 I think that's what we're looking for right?
Excellent! Yes, that's the one. And the setting I mentioned above will delete the partial object after 7 days
Want me to give that ISO a spin?
fucking beautiful
yeah
https://download.projectbluefin.io/bluefin-latest.iso
@Noel @Kyle Gospo ok all ISO work done for bluefin. I'll sit on the ISO announcement until I get the ack for Bazzite.
GitHub
feat: Remove rebase step now that ISOs are nearly ready by KyleGosp...
Move topgrade configs to /usr/share as these aren't really meant to be changed directly and are fatal if missing.
Move polkit rule to /usr/share for similar reasons.
Adjust README to have recent...
Ready as long as we can get this merged/tested
and @Noel is the PR ready to merge at bazzite?
I can do that rn
Remove the push: trigger from the on section of the build ISO workflow and make sure the isogenerator version for the action is updated to make sure it builds with secure boot support. You can look at Bluefin to see what I mean.
Also I opened it on Test branch for Bazzite, so you may need to cherry pick it to merge into main
https://github.com/ublue-os/website/pull/714 got the website ready
done & merged, ty
Adjusted the link to bazzite & approved
merging
goodbye image website!
GitHub
feat: Remove rebase step now that ISOs are nearly ready by KyleGosp...
Move topgrade configs to /usr/share as these aren't really meant to be changed directly and are fatal if missing.
Move polkit rule to /usr/share for similar reasons.
Adjust README to have recent...
Last thing to review and merge ideally
But we can launch without it
SHA256 support PR is up: https://github.com/ublue-os/isogenerator/pull/30
GitHub
feat: add creation of sha256 checksum by noelmiller · Pull Request ...
Purpose
There is a desire by the team to include a sha256 checksum with every creation of an ISO.
What I did
Added bash command to create sha256 checksum after ISO creation is completed
Tests
Feel ...
@Noel
We likely need to set screen rotation args based on hardware, but it works for me on an LCD deck
Testing OLED
How did you flash it?
fedora media writer
OLED deck is dead, test momentarily
Interesting. This is really good news!
I feel so relieved that at least the LCD version works.
inst.resolution=1280x800
this gets set in the old isogenerator on deck images
be super nice if we could make a script that does it based on dmi name
but if we can't, just doing that on *deck images is enough
Did you wipe your USB before you flashed it? Maybe I fucked up by not doing that.
nope
had a windows 11 installer on it
plugged in, replaced w/ media writer
booted
Test OLED here soon
Oh, did you flash from a windows machine? Or use the flatpak version of Fedora Media Writer?
Flatpak
previously used to fix my gf's surface laptop
hence the win11 installer
Ok. Just trying to narrow down the differences.
You should try from your dock as well as that's where I ran into the error.
You plugged directly into the port on the deck?
can do that next
yep
The resolution thing is an easy fix.
Do we need to do anything like that for the ally or other non steam deck images?
if we can detect hardware, be nice to set the resolution based on hardware for things like the LGO which are similarly rotated wrong
if we can't, 1280x800 is basically universal
We probably could figure this out, might take some work.
How early do we need to detect?
grub?
yeah grub in this case
Here's the way I fixed mokutil
since it's bash I just grab dmi name and then check for Jupiter or Galileo, then exit
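For reference, a minimal bash sketch of that DMI check (script placement and the echoed kernel arg are assumptions based on this thread):
#!/usr/bin/env bash
# Grab the DMI product name; Steam Decks report Jupiter (LCD) or Galileo (OLED)
product=$(cat /sys/class/dmi/id/product_name)
case "$product" in
  Jupiter|Galileo)
    # Deck panels are rotated, so force the installer resolution
    echo "inst.resolution=1280x800"
    ;;
  *)
    # Not a deck, nothing to do
    exit 0
    ;;
esac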
ps, P5 has some good looking PRs open as well
for making ISOs smaller
Yeah, this would be prior to install. We would have to figure out a way to get the anaconda OS portion to detect on boot.
users gonna use
@Noel
again fucked res but we know what to do
test on a dock next
It's me though!
I'm the user!
everyone is a user at 3am
@Noel no display cutover oddly
But that is the definition of working
@nickname ^
I will test again using potentially a different dock. Maybe my dock is not good for this. Or maybe I should try a different USB.
Though I'm less sus of the USB given that it boots my laptop.
What dock are you using @Kyle Gospo ?
I'll just order one of those.
It's some knock-off piece of shit, I wouldn't spend money on one
I got it before the official one was even out
And then put a 3D printed shell on it
Ahhh ok.
Honestly the official dock has a lot of problems too.
Gonna jump in voice.
Gonna get a PR going for the resolution fix.
ill add it to the next buzz
might have some stuff going on the next couple of days but ill try to do this because it's kind of fun and i need this nerdy fun or i will go insane
I'm hoping to have a fix out of the deck resolution issues by end of day.
I will likely push that around 9PM PST
since that's roughly when upstream builds
so we can kick off a new main w/ new mesa
sounds fine by me.
@Kyle Gospo thanks for that revert, my nvidia builds are cranking out now. 😄
does
-ally
need to be remade?
wait is the ally 1080p?
yeah nevermind, i dont think they need to be remade
@Kyle Gospo jelly? my biggest image is 5.4GB
wait until we get the webui anaconda is ready with the flatpaks. bluefin will still be much smaller, but it won't be 5.4GB anymore 😛
hah indeed.
Maybe adding a squash step before extracting the OCI image could reduce the size?
Do we add the flat packs to the ISO and pull them over?
Not yet but we definitely want to
@Noel since it's my PR I can't do any approvals
so if anyone else can take a look it'd be appreciated
Yeah, I added a lot of different folks that can review.
can you help me get a PR going for the EXTRA_BOOT_ARGS?
yup
sick.
I'll be in chat.
cleaning cat litter, soon as I'm done I'll join
then we can SOT too
I'd like to be done working on Ublue for a bit 😄
At least a day, give or take 😛
@Kyle Gospo @bsherman @j0rge https://github.com/ublue-os/isogenerator/pull/34
Our run for fixing Bazzite Steam Deck resolution in the installer didn't work last night. This should hopefully fix up the action to allow it to work this time 😄
done
Any ideas where we can save a few GB on the runners? Wanting to test out https://github.com/ublue-os/isogenerator/pull/35 but running into "out of storage" errors
GitHub
chore: squash image before extracting to filesystem by p5 · Pull Re...
A test to see if this works, and if it reduces the ISO size
I'm not really sure why you're running out of space when it currently doesn't run out of space in the non-squashed version?
I might have an idea how to free up storage space, but I too am unsure why this is using more storage
The existing step that clears storage doesn't look like it works
updated version for Bazzite: https://github.com/ublue-os/bazzite/pull/810
just saw a quick thing to do which has high value
https://github.com/ublue-os/isogenerator/pull/38
GitHub
chore: add checksum signing by BobyMCbobs · Pull Request #38 · ublu...
sign checksum to verify its integrity
Tried installing via the ISO from this run
https://github.com/ii/image/actions/runs/8040660146
and getting the error
The following error occurred while installing the boot loader. The system will not be bootable. Would you like to ignore this and continue with installation? failed to write boot loader configuration
I can continue but it just won't install the bootloader. anyone gotten this before?
GitHub
chore: update isogenerator version · ii/image@a6503a0
Contribute to ii/image development by creating an account on GitHub.
the warning?
oh
here's a photo (one sec... format)
This usually means that you are having an issue with deploying your container image.
It's not even getting to that step.
could you get the anaconda logs?
it gets past deploying image from local path and goes to installing bootloader
What are you doing for partitioning?
will try when I boot it up again
no changes, just what it suggests (partitioning -> done)
got to step away but will return!
sounds good! I'll download your image and poke around a bit.
also figured out an issue with the deck for booting our ISO
the deck needs to be in XHCI USB mode in order to boot it properly.
DRD will not work.
XHCI should be what it ships with by default, but mine was not shipped with that setting.
This is the error I got in a VM
FileNotFoundError: [Errno 2] No such file or directory: '/mnt/sysroot/boot/efi/EFI/fedora/grub.cfg'
Right, yeah I'm installing with QEMU + libvirt
Trying to install the bluefin-latest ISO in the same environment and if it fails I'll checkout the Anaconda logs
Huh. Bluefin installer works fine. Good to know
These are the logs from the Anaconda tmux (?) any other useful places to be able to provide logs?
not sure where the line is
https://github.com/rhinstaller/anaconda/blob/master/pyanaconda/modules/storage/bootloader/bootloader.py#L142
I know the version running doesn't match the version on the master branch
GitHub
anaconda/pyanaconda/modules/storage/bootloader/bootloader.py at mas...
System installer for Fedora, RHEL and other distributions - rhinstaller/anaconda
Also @Noel if you're still stuck on those 2 items, just file issues with the details and tag em help-wanted
no reason to suffer alone
the flatpak and the language one?
yeah, I can do that.
yeah
Might have a potential solution for the losetup issue we've been getting in docker.
I'm gonna test that sometime today.
i'm testing a PR for it now 😄
you are wonderful. 🙂
so... one thing i noticed.. our policy, meant to enforce verification of signatures for ublue-os rpm-ostree native containers, impacts this 🙂
oh?
i realized with the inclusion of loop devs IN the container image... we may be able to bypass the need for docker and get back to podman
that would also be good!
or you can use either!
sure, i just would prefer to not REQUIRE docker
right!
eh, doesn't work 🙂 still needing docker... not trying to solve that now, was just curious
I think I may submit a ticket upstream to the podman team on it.
I think they would like this use case to work for us.
and if I say "Look what docker can do that podman cannot", they will probably want to look at it :P
Eh. I think we could probably figure it out if we dig deeper.
I haven’t run podman as root privileged yet for example
oh as sudo?
yeah, I suppose we could test that.
right. it’s not an apples-to-apples comparison unless we run as host root
do we need to sign the image?
😂 not until it runs in podman 😉
but yeah, it's probably more correct to sign it with our cosign key
so the action does sign it: https://github.com/ublue-os/build-action/blob/main/action.yml
GitHub
build-action/action.yml at main · ublue-os/build-action
ISO Builder. Contribute to ublue-os/build-action development by creating an account on GitHub.
or is supposed to.
OH
nevermind.
we don't use that
and...
this does.
mmm... true
we're only signing the image if it goes to the main branch:
which is correct
can I bypass the policy?
with podman? just for testing?
I couldn't care less if the PR one is signed.
man. that's too much to have in a single action, really hard to read
It was the solution Jason came up with to try and centralize all our container builds into one action.
right... i think a shared workflow would be better
in some ways yes.
in some ways no 😛
lets debate that another time
fair
i don't believe the cosign signing happened... even in the isogenerator main branch, because I couldn't pull that image with podman
also, why do we need versioned isogenerator images? 38/39/40?
cool
lol
who cares if the generator image is running fedora 36 or ubuntu:22.04 or alpine... it's just supposed to do a job
you can manually modify your /etc/containers/policy.json but... once done, it may not get updated by image updates
ugh...
yeah.
there's a few different things which have been going on which make me think we really need to revisit that policy
trying podman with sudo
saw the update to PR.... thanks for testing that! I mean, could have been a distinct PR, too 😂
So Jason has a theory it might not be the mknod command, but that it's because we are running the entry point as a shell script that it's setting up the environment to work with the loop devices properly.
I'll test removing mknod now and see if that's what's going on.
k, happy to have you verify
nope mknod is required.
ok
New feature in upstream (https://github.com/JasonN3/build-container-installer): multi-language support
Is this the line that fixes it?
Also I see that yours is showing server variant rather than Silverblue or Kinoite, not sure if that matters.
Yeah. That file overrides a macro that rpm uses to identify what locales to install. The default is all, but that file sets it to english
You can also just sync the upstream into your repo instead of recreating all of the PRs
@Noel On the ISO workflow refactor to include the build outputs, are you wanting me to change from the single rclone command that copies the entire ./end_iso directory to R2 to point to the individual files? Or simply replace the hardcoded ./end_iso string with the output-directory output?
I.E.
I would say the single output-path because it's less transactions to cloudflare.
So something like this
https://github.com/ublue-os/bluefin/pull/982/files
(Release not yet created, so haven't tested it out)
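For reference, the single-copy version would presumably boil down to something like this in the upload step (the step id "iso" and the bucket name are placeholders):
rclone copy "${{ steps.iso.outputs.output-directory }}" R2:bluefin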
Another PR up for reviews
https://github.com/ublue-os/isogenerator/pull/49
Deprecates two required inputs to simplify calling the action
GitHub
chore: remove ACTION_REF and ACTION_REPO action inputs by p5 · Pull...
I don't think we need to pass these in. By calling the action in the parent workflow, the code is all available on disk already.
Have left the inputs there but included a deprecationMessage ex...
@Robert (p5) New release with all your PRs should be up.
Thank you so much for all the excellent work you have been doing! It's really appreciated to get some of these items addressed so the action is easier to use!
@Robert (p5) could you take a look at this? https://github.com/ublue-os/bluefin/issues/986
Ideally it would be good to have tests for building the ISOs to make sure the workflow works as intended before we merge the changes into main.
created an issue for Bazzite as well.
You happy with a checkbox on the workflow_dispatch that either uploads to R2 or job artifacts?
Oh, ISO builds whenever the workflow is changed, but do not publish. That's probably a better solution
And will stop me from needing to do this
https://github.com/ublue-os/bluefin/pull/982/commits/878d5ba1ce64642d61b3c157c644d9e4e045c7bb
Will merge the version bump first, then will look at that
yup! I bumped the version.
https://github.com/ublue-os/isogenerator/releases/tag/1.0.9
@j0rge trying this to see what happens 🙂
docker run --rm --privileged --volume .:/isogenerator/output -e VERSION=39 -e IMAGE_REPO=quay.io/centos-bootc -e IMAGE_NAME=centos-bootc -e IMAGE_TAG=stream9 -e VARIANT=Server ghcr.io/ublue-os/isogenerator:39
see if it works as is.
Also I learned https://weldr.io/lorax/livemedia-creator.html has the ability to spit out qcow2 images as well as AMIs.
so there may be some functionality with Lorax that might be able to do that?
Aren't both of those already fully functional with osbuild?
probably? it's what bootc-image-builder uses for their backend.
Ah, I didn't know osbuild depended on it
other way around. bootc-image-builder uses OSbuild as their backend. 😛
Think I'm getting confused.
So BIB depends on OSBuild which depends on Lorax?
oh, that I don't know about OSBuild.
Yeah. I know about BIB and osbuild, so I was trying to say why would we want to use Lorax (which IMO is far less intuitive) to spit out AMIs and qcow2 images when we could use OSBuild
this unfortunately didn't work, but I bet if I add an empty grub config it will work lol
@Kyle Gospo @HikariKnight TOUCH GRUB...
I'm hopeful bootc-image-builder will be the answer long term for us.
or just using OSbuild rather than lorax directly.
lmao! seriously that issue struck again?
I think it's because bootc images don't include a grub config is what I'm guessing?
most likely
still funny
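If that's it, the workaround is presumably just shipping an empty config in the image, something like this in the Containerfile (the exact path is an assumption based on the error above):
RUN mkdir -p /boot/efi/EFI/fedora && touch /boot/efi/EFI/fedora/grub.cfg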
Bluefin PR (including build outputs and PR tests)
https://github.com/ublue-os/bluefin/pull/982
GitHub
chore: refactor ISO build workflow to use isogenerator outputs by p...
Refactors the workflow to use the action outputs rather than hard-coding the ./end_iso directory.
Also moved the environment variables from the rclone step to env arguments instead of defining them...
oh yikes. unless it skips the upload ISO step, you are building ISOs and pushing to prod.
Tested that in this workflow run, and it works as expected
https://github.com/ublue-os/bluefin/actions/runs/8102307614/job/22144480339
OH, i see the if statement.
And to make sure, I changed the rclone command to an echo when making the PR
lol sorry, was sweating, didn't want to upload stuff until Jorge is good with that, it costs money for us to upload ISOs 🙂
Thank you for all the work on this.
The only thing I don't know about is skipping the container build steps. They are required workflows, so without them passing, we can't merge the PR.
I know how we can stop them from running if only the ISO workflow is changed, but the PR will still require the container build steps to be successful before allowing us to merge
I think it would just be adding build_iso.yml to the list of ignored files? https://github.com/ublue-os/bluefin/blob/f237cefd309202d958705dd9d5ae42f6076a16b4/.github/workflows/build.yml#L10
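For reference, ignoring the ISO workflow there would look something like this (trigger shown in isolation; the rest of the file is elided):
on:
  pull_request:
    paths-ignore:
      - .github/workflows/build_iso.yml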
Apologies, I updated the message
I know how we can stop them from running if only the ISO workflow is changed, but the PR will still require the container build steps to be successful before allowing us to merge
The issue is container builds are a required check in the repo. If we skip them, the branch protection rules will never be satisfied on these ISO PRs
@j0rge @Kyle Gospo thoughts?
container images should not be required to be built when merging changes to the iso_build.yml workflow.
I think ISOs should be made and uploaded as artifacts to validate the workflow functioned as expected, but no reason to require container images to be built.
I don't disagree, but the primary purpose of the repo is for the container builds, so it makes sense for that to be the required check
yup!
I'm fine with fully manual for ISO builds on my end
no need to automate it
so I'm for sure the wrong one to ask :p
(We're automating ISO CI not CD)
we're talking about not requiring the containers to build when merging changes from the iso_build.yml workflow.
right but who cares
because if any of the containers fail to build, then we have to rerun them or force merge.
like, github will care but we don't need to block work over it?
Your wallet if we go back to paying for builders 🙂
just cancel the container build if you need to rev fast
They have to succeed in order to merge....
right, and then we override because surface is broken, that's how we've been doing it
Ima be away for a few hours. Will catch up later
don't worry about the checks we can work around them
wouldn't CI be an action done in isogenerator
and not in things consuming it?
then it can run every time there's any change
Bazzite doesn't force me to build the containers anytime I'm doing work in build_iso.yml workflow. It makes no sense to build the containers each time I'm trying to merge is what I'm getting at.
then send the PR?
OK, I'll figure it out.
I'll do it
sec
oh, the repo structure is different
this is about having an additional check in place to attempt to build the ISOs before pushing them up.
I'd like to see them build if we make any changes to the action before we merge in use of a new version of the action.
I may not be explaining this very well.
yeah and I'm not sure we need to fix the actions right now
just force them through, we know surface is broken already
Like I get it's important but if it's in the way of better demos then I say table it.
right now the brew experience and shortcuts etc. are broken and that's what I'm prioritizing
It will simplify development for the future. I don't need to force things through or ask you guys to force merge things. I can put together a PR of what needs to be done.
I just wanted to get these items in from P5 as a quality of life improvement to the action.
the new version of the action is done, I just want to make sure if we use the new action version in Bluefin and Bazzite, it's easy to rev 🙂
Either way, don't sweat it, we can wait.
yeah sorry I'm in the middle of fire
also, for later
there is a fire?
we need to find a way to archive at least a few sets of ISOs in like a backup bucket in case we gen new ones and they have a regression
sorry to put on additional stress, was not something I was intending to push hard.
was thinking like a monthly cron to copy them into a backup bucket
sounds good to me!
should be pretty easy with Rclone.
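A minimal sketch of that backup copy, assuming it runs from a monthly scheduled job and the bucket names are placeholders:
# copy the current ISOs into a dated folder in a backup bucket
rclone copy R2:bluefin R2:bluefin-backup/$(date +%Y-%m)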
Same Bazzite PR
https://github.com/ublue-os/bazzite/pull/829
Advise on https://github.com/ublue-os/isogenerator/issues/45
Are we happy with an approach like this
https://github.com/rsturla/eternal-iso-generator/blob/main/action.yml#L49-L76 ?
I think everything in the action runs in a normal Fedora container, so just replacing that container with isogenerator would do 90% of it.
I can take a crack at the refactor sometime next week, it's not a high priority, just something that bugs me.
The main thing I am wanting from this issue is for the workflows to not need to put stuff like container: fedora and --privileged. This can all be done inside the action, and the user calls the action like any other. This also minimises the chance of another action being incompatible with a Fedora container
We run it in a Fedora container because Lorax needs it.
Yeah, that's completely fine. My point is if we use a docker run command inside the action, the user can call it like any other action. If we use a container action or container workflow, the user needs to be aware of it and adapt their workflow accordingly
Oh, so you're saying do the work only in the container for the work that needs to be done in there?
Not wrapping the entire action workflow with a Fedora container?
That makes sense
The entire action doesn't need to be in a Fedora container.
And that is one of the benefits of us building the isogenerator container in the first place 😄
Why might the ISOs generated for Cosmic not allow us to create the users during install?
I must be doing something wrong, as I thought the installer would be identical, no matter what image I am trying to install
You'll want to use Kinoite for the variant to create a user.
because gnome doesn't require a user to be created, the silverblue variant doesn't make it available in anaconda.
Ohh, understood. Thanks!
@Robert (p5) I found the problem with cosmic not installing properly.
we need to use Fedora 40 installer for it to work.
the problem is that our container files are not being built properly for isogenerator
It is only using Fedora 39 and is not taking anything from the build matrix.
I'm going to deprecate the build-action repo since it's not being used anywhere else at the moment and try to correct it.
https://github.com/ublue-os/isogenerator/pull/53
@Robert (p5) now that the container builds are fixed, go ahead and try making another cosmic ISO using the container and see if it will install properly. I'll take a look at the action next and see what might be causing that not to work in your draft PR.
Sweet! Will give it a shot.
I tried F40 installer first, but switched to 39 because the user creation options weren't there, but this turned out to be something completely different
For cosmic images, it only works if the ISO version is 40
Got to take the dog for a walk so will investigate myself in a little bit
Replace 39 with 40 for the tag for isogenerator.
Isogenerator needs to be running in a Fedora 40 container.
For 40 images to work.
you should also set variant to Kinoite
command I am trying now:
docker run --rm --privileged --volume .:/isogenerator/output --pull always -e VERSION=40 -e IMAGE_NAME=cosmic-silverblue -e IMAGE_TAG=40-amd64 -e VARIANT=Kinoite ghcr.io/ublue-os/isogenerator:40
@Robert (p5) I made some good progress in this PR for adding flatpak support: https://github.com/ublue-os/isogenerator/pull/56
GitHub
feat: Add flatpak support by noelmiller · Pull Request #56 · ublue-...
Purpose
This PR is to add installation of flatpaks as part of the ISO build process. This is something that Fedora does for Silverblue and Kinoite, so we are utilizing the templates they use to bui...
If you have the cycles, the action and test workflows need to be fixed up to use the container file rather than the individual make steps. We wanted to plan on doing this anyway and will need to due to checking out submodules.
For the testing workflow, we probably need to combine the build container workflow and the build test ISOs workflow because the build ISOs workflow will be dependent on the build containers one now.
Should you just move back to the upstream repo since there's multiple PRs in isogenerator that are just copies of the code I'm pushing into the upstream?
you don't include several features in upstream that we need for isogenerator. Secure Boot stuff is a must for us.
I'm fine with Secure Boot being added. It's generic right now, it can be added. Submit a PR and I'll merge it in. What other features are missing? I've got half your issue list either done or about to be done
EXTRA_BOOT_PARAMS need to be included for building deck images.
specifically for setting grub options
Ok. That's easy to do. I can knock that out in a few seconds. I'll even get that added today
you may have already merged that.
secure boot is the big one.
I just checked, I don't have that yet. I'm also going to do it where it modifies the resulting grub file instead of the templated grub file
Again, submit a PR and I'll approve it
OK, give me a bit.
I don't have perms on the repo.
do you want me to fork?
Yeah, normal PR stuff. Create a fork, create a branch, add your code, submit PR
I have a branch ready to go, I just can't push it.
Okie dokie.
What branch? On a fork or where?
I used to have access to create branches on container-installer.
I no longer have that access.
I cleaned that up so you'd stop getting a few dozen emails a day from build attempts
X_X I did all of my commits on the actual repo. I'll go ahead and create a fork.....
You can edit the .git/config file to point to a fork. There's some git commands to do it too that I don't know, but they just edit that file
it's fine. I'll move my files I needed to change.
Ok. That works too
need to fix hardcoding in order to allow the action to run in a fork.
https://github.com/noelmiller/build-container-installer/actions/runs/8158493674/job/22300620244
Submit the PR to the upstream repo not the fork. The workflow has some hardcoded values that I need to eventually replace, but GH doesn't have vars for some of the items, so I need to recreate some of the vars, so until then I just hardcoded the values
need you to approve the workflow run.
Running
It shouldn't need it for future runs. It's just first of new person
failed to push the container: https://github.com/JasonN3/build-container-installer/actions/runs/8158627798/job/22300999045
probably because I don't have rights on the repo.
You shouldn't need it. It uses the token's permissions. I know I ran into that issue with the isogenerator repo. Give me a minute to remember what I had to do
I can jump in voice in about 40ish minutes. I have a meeting from 9:30 - 10:00
I added you back as a collaborator so you can push a branch from within the repo. There's something GH is blocking when it's a PR from a fork. I may need to change the packages. I'll keep messing with the fork you have
ok, I'm in voice. you want me to just PR directly on the main repo then?
OK, PRed directly.
I'll join a minute. I'm in a call
Awesome!
Apologies I've not been around much. Getting quite stressed at work and need to take a step back from everything for a few days There have been some layoffs and our workload has significantly increased.
no worries, we are going to deprecate isogenerator and use @Skynet's upstream repo
Hopefully more action on Thursday, stay tuned! We are close to figuring out installing flatpaks as part of the install process. We figured out how to fix the remote to flathub, but the current lorax install template Fedora uses for building Silverblue doesn't resolve dependencies, so it just attempts to install the flatpak without dependencies unless you specify the dependencies manually.
Almost all the features are merged in from our Isogenerator
I figured out what was wrong with the code I was working on. I just need to make a small adjustment when I have time (probably tomorrow while at the airport) and then flatpaks should work with dependency resolution
Awesome! I'll poke around tomorrow at getting the container dir stuff sorted.
how are flatpak repos stored in the system
couldnt ublue just bake in flathub by storing the repo file in some /usr directory
they are in /var/lib/flatpak by default
Some ostree OSs store it elsewhere. Anaconda has a built-in way to deploy flatpaks during install, so we're using that
i see
If you want to see the current progress, it's here: https://github.com/JasonN3/build-container-installer/pull/32
repo file is in /usr/etc on bluefin & bazzite, and we ensure it's installed via setup script
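For reference, the setup-script side of that is presumably a one-liner like this (remote name and URL as used elsewhere in this thread):
flatpak remote-add --if-not-exists --system flathub https://flathub.org/repo/flathub.flatpakrepo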
ty for linking it
Ok, here to test today. I'm going to setup a decent size box for building. I'll record and document my progress via stream / and possibly the Office Hours channel if that's ok.
@Skynet / @Kyle Gospo should I start by trying to build this branch on an existing ublue system?
I'd like to try and help integrate flatpacks
There is a lot we need to convert before flatpaks can happen.
Flatpak support needs to be done first in https://github.com/JasonN3/build-container-installer
GitHub
GitHub - JasonN3/build-container-installer: Creates an ISO for inst...
Creates an ISO for installing a container image as an OS - JasonN3/build-container-installer
And then I need to switch all our repos that build ISO images to the new action and remove related flatpak services from the container files.
Skynet is out of town at the moment, but we are hoping to return to this tomorrow.
If you are under a deadline @hippiehacker, the current ISOs we are using are fine, they just require Internet connection to install all the flatpaks.
If you can wait, I'm hopeful we can get everything sorted with flatpaks by the end of the week? Really depends how much time @Skynet has since he is currently driving that PR.
Ok
Maybe today I'll just focus on local commands I can run to generate ISOs from a fork/branch of ublue. My target is to create bootable images, past the ISO, possibly using bootc if that's getting closer to possible.
GitHub
GitHub - ublue-os/isogenerator: Creates an ISO for installing a con...
Creates an ISO for installing a container image as an OS - ublue-os/isogenerator
have you played with this yet?
Nope, I can focus on that, if that's the suggested way forward..
I've lost track of all of this
lol
I'd rather focus our efforts on https://github.com/JasonN3/build-container-installer
GitHub
GitHub - JasonN3/build-container-installer: Creates an ISO for inst...
Creates an ISO for installing a container image as an OS - JasonN3/build-container-installer
We have several PRs we need to merge in. One of them is for bootc support
If you can wait, I promise it will be worth it.
Ok, will focus on general process: is there a way to cancel the refresh-md automation on install?
$ rpm-ostree install virt-manager
error: Transaction in progress: refresh-md
You can cancel the current transaction with 'rpm-ostree cancel'
Trying to setup virt-manager to connect to beefy VMs. I cancel it, but then it starts the refresh-md process again. I cancelled it enough times to kill it! 🙂
You have to kill -9 that thing like 60 times, I remove this from bluefin entirely.
rpm-ostree install?
no, it's gnome-software's rpm-ostree backend doing a metadata refresh at boot.
Oh ok.
which I remove because gnome-software works much better when it's only doing flatpaks and no rpm-ostree things
Yeah, we want to steer folks away from doing rpm-ostree anyway.
yeah it's one of the worst parts of the vanilla fedora experience
this is like one of the first things we fixed
it triples the amount of time you need to test things
because it can lock you out for a solid 5m
jeez.
yeah, that's not great.
how
want to remove it too
whats the package name/process of doing it ?
@Noel someone's pinging on mastodon, we fixed the english-only aspect of the installer right? kb and lang should be setable now? isogenerator changelog seems to say that, just confirming
Yeah that's fixed
And our ISOs already have it
yeah that's what I figured, just doublechecking before I respond
yup, should be fixed.
Targeting next week as the time we switch to https://github.com/JasonN3/build-container-installer and deprecate isogenerator. I wouldn't mind getting some help updating documentation to point to this repo for both our recommendations about generating main images and use of the action in general.
Definitely will need to do some testing, but I feel confident it should work fine. Also the bonus of installing flatpaks as part of ISO installation is in the works, and should hopefully be ready soon!
we will just need to include all dependencies needed, but it will be a huge step up from what we are currently doing!
well, I want to test out the new iso right now. So data backup time!
Flatpak support added to main branch: https://github.com/JasonN3/build-container-installer
action: jasonn3/build-container-installer@main
container: ghcr.io/jasonn3/build-container-installer:main
GitHub
GitHub - JasonN3/build-container-installer: Creates an ISO for inst...
Creates an ISO for installing a container image as an OS - JasonN3/build-container-installer
What's the easiest way to test if we wanted to build our bluefin variants?
Depends on how you want to do it. You can setup a GH repo that will use the action to build the ISO and upload it as an artifact you can download. Or you can run the container on whatever to build the ISO. You can even use the make commands to build the ISO. The make install-deps command will install all the necessary packages. It kinda depends on what you're used to and how repeatable you want to make it. The container will probably give you the best balance of quick and dirty and not loading up every package on your host machine
The main variables you'll need to define are the image_* vars. The rest of the defaults should be fine unless you want secure boot
If you want to install flatpaks, the var is FLATPAK_REMOTE_REFS. I forgot to add that to the README. An example is app/org.videolan.VLC/x86_64/stable
what,s the separator for flatpaks?
a space
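e.g., space-separated refs like this (these two refs appear later in the thread):
FLATPAK_REMOTE_REFS="app/org.videolan.VLC/x86_64/stable runtime/org.kde.Platform/x86_64/5.15-23.08"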
Sounds great! @j0rge @1/4 Life I will start working on converting our actions to use this one this week.
Bazzite Nvidia iso worked great. The tpm issues weren't there. It pointed to the new version after the first boot. Pretty stoked
awesome!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Thanks for the great work with the new build-container-installer ... testing the flatpak installer (VLC is my initial test) .. how can I fix the launch path, as /app doesn't exist? (fyi in NZ timezone)
^^^
built image with ...
sudo podman run --rm --privileged --volume .:/github/workspace/build docker.io/heyste/build-container-installer:latest VERSION=39 IMAGE_NAME=kinoite IMAGE_TAG=39 VARIANT=Server FLATPAK_REMOTE_REFS="app/org.videolan.VLC/x86_64/stable runtime/org.kde.Platform/x86_64/5.15-23.08" FLATPAK_REMOTE_URL=https://flathub.org/repo/flathub.flatpakrepo
built an image so I got all the latest updates from the last week 🙂
does flatpak list show that the flatpak is installed?
heyyy, does anyone know which kernel is used for the live system in the generated iso? is the latest one pulled from the repos or is one of the older ones from back when 39 was released used?
i suppose this pertains to kickstart behaviour
figured out what exactly is responsible for building the livecd and the utility is called lorax, which pulls the latest rpms from repos
yes, the ISOs are built using Lorax.
The command should be the path inside of the flatpak. /app exists inside of the flatpak filesystem and not inside of the OS filesystem
Yeah, it should always be the latest in the live repos. None of that gets hard coded to specific versions
@j0rge @Kyle Gospo Do the base images that Bluefin and Bazzite use have any flatpaks installed on them that we remove as part of first run?
I know stock silverblue comes with several flatpaks.
yeah whatever it comes with I blow away
iirc when we force remove the fedora remote it removes all of them
OH.
OK, that might be a problem, but we'll see.
you probably got the same error
I did not.
huh
let me rerun mine, but success!!!!!!
NICE
So most of the packages it is pulling in on an update are locale packages.
that sounds right?
that seems to make sense to me
I can add those as part of installation?
there's an upstream thread about locales
they're trying to figure out how to do it
It also pulls in Mesa, HEIC (which I couldn't install), and openh264
but it might also be "put all of them on there"
honestly.
I think the simplest solution is to tell people to update their machine with topgrade on first boot.
The ISO is going to be old anyway if we don't generate them often.
yeah this is why I was thinking a flatpak update at the end of the yafti screen
because those will be diffed
that would also work!
like instead of flatpak install everything just flatpak update -y
Just have to make sure to tell people to use yafti.
well, you don't want to include all flatpaks shipped in Yafti do you?
it launches by default and you can't quit it until it ends.
no, those will be optional on top.
ok perfect.
like if you just click next, next, next, close we'll make it do an update
if you through and pick chrome, edge, etc, then it will do those + the update
we could still ship flatpaks embedded on the image early if we just tell folks to update their box on first login.
unless you want this to be part of the yafti rework.
then we can wait to ship it.
what do you mean?
we don't need to tell people to update on first login, yafti will do that for them
So I have flatpaks installing right? One issue is that it doesn't pull in all the dependencies, so you need to do a flatpak update to get all of them.
Yafti doesn't have the functionality to do flatpak update -y yet?
it's a custom command, we can change it
OH ok.
That's fine then. I didn't realize we were going to do that outside the rework.
I'm cool with that.
Let me test things in the action next.
I do find it curious that you got that error on installation.
I'm curious that maybe I was connected to the network and you weren't?
https://github.com/ublue-os/bluefin/blob/main/usr/libexec/ublue-system-flatpak-manager
we replace this with just flatpak update
yeah I'm making a new iso
I think I might have tested the one where I removed the kde runtime
starting from scratch
It would not have built if you removed the wrong one.
scratch the network thing. It's not even an option to connect to a network during installation.
This is also an option!
I was thinking I could do that as well.
I'm gonna remove VLC as we won't be shipping that package anyway and start working on the action.
ack
now that I validated it works.
If you find out all the dependencies for the final image, you can have them embedded in the ISO. That removes the need to run flatpak update on first run and makes all of the flatpaks available in the offline ISO
I tried installing one of the dependencies and it said it wasn't available from flathub for install.
not sure how it's pulling certain things in.
That makes no sense since it has to come from flathub. What was the dependency?
@Noel ok so yafti crashed right on boot, but all the flatpaks are there! We'll likely need the yafti fixes pushed so that part works out
HEIC org.gnome.Loupe.HEIC/x86_64/stable
unless it's a runtime and not an app.
gave me an error that it couldn't find the package during lorax build.
ok, same thing as you, it got a bunch of locales on update, other than that, it's awesome!
It's probably a runtime. I can't find it if I search flathub's website. I can find Image Viewer, but that's just org.gnome.Loupe. If you use flatpak info, you can see the ref, which includes app or runtime
OK, yeah, it's a runtime.
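For reference, a couple of commands that help collect the full ref list (the app ID here is just an example):
# print the full ref (app/... or runtime/...) for an installed flatpak
flatpak info --show-ref org.gnome.Loupe
# list each installed app alongside the runtime it depends on
flatpak list --app --columns=application,runtime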
OK, I can include all the locale packages as well.
yeah, most of these are runtimes.
weird, Yafti didn't crash for me.
my window popped open and then closed immediately
interesting.
doing a new install to grab those extra dependencies.
nod I can run it again after work with the locales if you want
I'll give you a command in a bit to try.
GitHub
Move hardcoded flatpak remote to configuration by jkonecny12 · Pull...
Without this change Anaconda always set the remote after flatpak
deployment to
name: fedora
remote: oci+https://registry.fedoraproject.org/
that worked until now. However, uBlue[1] want's to u...
garret just sent this to me over mastodon
oh I see you commented. 😄
I also already added in a condition for it. When 41 gets released, it will need to be tested, but it will at least remind me of when it eventually is released
https://github.com/JasonN3/build-container-installer/pull/62
GitHub
Prep for Anaconda 41 by JasonN3 · Pull Request #62 · JasonN3/build-...
Starting with version 41, Anaconda will add a configuration variable for flatpak_remote. This already gets the code in place so it doesn't get forgotten when Fedora 41 comes out.
truly horrendous looking:
Might be nice to have something that parses a list you provide to the action?
It'd be nice to just have flatpak.list in the repo
you'd need to include runtimes too, but yes, you could write comments which would be nice.
I bet we could do something like this as an improvement.
Like.. create a bash script to parse the list and then have the list be the input with spaces in between each entry to the action.
We did talk about adding in the ability to create some files. One for each flatpak, and then within the file you could list the package and dependencies. If you want to throw in a feature request, I can do that next. I'm almost getting 40 support working. It's just doing the final tests right now
Sounds absolutely wonderful. I will get on that now.
Opening the FR I mean.
FR added
@j0rge missed 1 dependency.
but no updates were required beyond that one:
This command will get you a fully offline Bluefin Testing ISO with no requirement to update flatpak:
we probably still should do a flatpak update after Yafti runs since we are not publishing ISOs every day.
@hippiehacker you might find this interesting.
Important note: This is from the bluefin testing branch.
it's not merged into main.
Once this feature is in, the docker command should look a lot cleaner: https://github.com/JasonN3/build-container-installer/issues/65
Looks like we will need to do some fixes for yafti:
👀
flatpak install does allow me to install these packages:
Probably just need to fix what yafti is referencing.
we will probably want to disable the fedora remote as well
we can do that with a service file like we currently do
@Noel As long as the tests pass, how's this look? It adds in an option for FLATPAK_REMOTE_REFS_DIR, which gets sorted (which is supposed to remove duplicates) and then appended to the FLATPAK_REMOTE_REFS list
https://github.com/JasonN3/build-container-installer/pull/66
so flatpaks would live at the root of the repo? Curious how it would use it in the docker container as well besides the action.
You can put it wherever you want. That's what FLATPAK_REMOTE_REFS_DIR does. You tell it where the files are stored. For the container, you just mount a volume into it and then you point FLATPAK_REMOTE_REFS_DIR to whatever directory you mounted it as
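For reference, a container invocation with the refs dir mounted would look something like this (paths and image vars are placeholders, following the earlier examples in this thread):
sudo docker run --rm --privileged \
  --volume .:/github/workspace/build \
  --volume ./flatpak-refs:/flatpak-refs \
  ghcr.io/jasonn3/build-container-installer:main \
  VERSION=39 IMAGE_NAME=bluefin IMAGE_TAG=latest FLATPAK_REMOTE_REFS_DIR=/flatpak-refs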
Oh I see, we should probably get an example documented for that.
It's built into the repo. The test uses it
The idea is sound though, it will be much cleaner!
right, but an example docker command wouldn't hurt in case someone doesn't want to use the action.
I'm good with whatever, just thinking what other people might need.
If I add an example for everything, I'm going to end up with 90% of the README being examples
OK, just offering a suggestion.
If it ends up being enough of an issue, I'll create a wiki and put all of the examples in there
that's probably not a bad idea.
but yeah, we can just wait.
NBD
Alright, I'll wait for the tests to pass and then I'll merge everything in
Thanks dude! I really appreciate all the work you are doing! This is going to be awesome. What is your thought about putting out a release after I do some more testing?
would be nice to get pinned to a release rather than using main.
As we discussed, when you are ready to merge in your PR to one of the UBlue repos, I'll create a release and then you can replace main with that release
perfect!
I'm hopeful there shouldn't be any more bugs or feature requests needed to get this cooking in Ublue.
That's why I'm waiting till you have everything working so I don't have to create multiple releases in the same week
yeah and then all yafti needs to do is run an update on first boot, and since the ISOs will generally be fresh it's just a quick update instead of a full flatpak install so the UX should be nice and clean
@j0rge
I can do that in the service unit too until Yafti is ready for prime time.
if we want to merge sooner rather than later.
I will need a service unit to disable the Fedora flatpak repo anyway.
@NerdsRun you mentioned that you had fixed the "update instead of reinstall" while we were hanging last night, how far do you think you're away from cutting a release?
Is there any harm in leaving it enabled?
prioritization of what is installed when using a flatpak install command.
I don't want folks to have 2 different remotes and having flatpak be confused about which one to use.
yeah speaking of, we still have a user remote we need to get rid of too, ideally we just want flathub system in the gnome-software dropdown and nothing else
I'm pretty sure flathub should get prioritized since it's higher in the list. It should also have newer versions if fedora's repo is just a snapshot
unless we can do prioritization as part of the installer, but it's already set to 1 after the fact.
I don't think the one from the installer comes with the user remote as an option. I could be wrong on that though.
yeah, also we haven't looked at the service units that do all this stuff in quite a while so we're due for some housecleaning
like, if we can delete a bunch of that stuff ... that'd be a win
Alright, I guess it isn't. Have you checked if it's there during the post actions? We could just add that if FLATPAK_REMOTE_NAME is not fedora, remove it
it's baked into the silverblue image I believe.
which is what bluefin is built off of.
It's technically possible that lorax is adding it as part of install?
I could check in a container quick.
It was originally removed as part of the flatpak-manager in bluefin, but your Flatpak installs are happening before this service is run
https://github.com/ublue-os/bluefin/pull/1022/files#diff-55e37d8d1335069562ee9b2df6a929841bcebde12175b64900fbeab019b7f5dcL15-L19
I'm downloading one of my builds. I can then check after it's done installing to see if the file is there. If it is, I can just add a post script
it may not be in the base images, not really sure on that.
yup, we're talking about if it's possible to remove that remote as part of install so we could just get rid of the flatpak manager service entirely.
Do you want me to add an issue for the enabled Fedora remote?
Yeah, might as well track it
Sounds good. So I did not see any remotes in the bluefin container file.
which leads me to believe it's probably part of installation somewhere.
It's probably buried somewhere in the code. It's going to be a lot easier to run flatpak remote-delete instead of finding it in the code and removing it
You could check the PR that was for adding a configuration item for the flatpak repo and see if they found and removed that section. If not, that might be something to bring up
I can take a look into that.
yeah, it's buried.
I found it. It's not part of the installer. There's a service, flatpak-add-fedora-repos.service. I'll just add a post-script to disable it if the wanted repo isn't fedora
oh excellent!
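A minimal sketch of what that post-script could check (variable and service names as discussed above):
# hypothetical post-script: only keep the fedora remote service if it's actually wanted
if [ "$FLATPAK_REMOTE_NAME" != "fedora" ]; then
  systemctl disable flatpak-add-fedora-repos.service
fi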
Makes sense why I didn't see it in the container.
Both flatpaks from the build are listed
@Noel @Kyle Gospo @Skynet ok so I'm thinking this as a plan for bluefin for the ISOs since noel has them working in testing.
we could just turn off yafti for now in testing
so we can test removing/updating the flatpak service units.
like we can just comment out the lines that install it for now
clean up the service units so they're not installing the flatpaks. Then we'd want to decide if we want to do the flatpak update in there or have yafti call it
then when yafti is ready we turn it back on
I can also just repurpose the service unit until yafti is ready.
to just update right?
flatpak update -y basically?
https://github.com/ublue-os/bluefin/pull/1022/files#diff-55e37d8d1335069562ee9b2df6a929841bcebde12175b64900fbeab019b7f5dc
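A minimal sketch of what that repurposed unit could look like (unit and target names are assumptions):
[Unit]
Description=Update system flatpaks on first boot
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/bin/flatpak update --system -y

[Install]
WantedBy=multi-user.target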
Once this merges: https://github.com/JasonN3/build-container-installer/pull/66 I can remove the line to disable fedora repos.
hah yeah this seems like what we want!
oh would you mind adding thunderbird? It's already set in the dock layout, just needs to be installed.
Discussing with @Skynet, we could redo our workflow with some additional checks to combine the build.yml and build_iso.yml workflows. There are some intelligent things we could do there regarding releases so it won't build an ISO every time something merges in.
but when we do a release, it will build a fresh container off the latest changes and then use that to build the ISO.
yeah that would be baller.
so they will be coupled, but decoupled from each other at the same time if that makes sense.
also would make testing easier.
I won't make this as part of the same PR though.
yeah and also I messed that up in the past and confused the release-please action lol
but it would be nice to be like "ok time to release", merge the PR and then have one build + ISO build in one go so someone doesn't have to sit there babysitting it
only annoying part I could see here is if one of our containers fails to build, it will fail the whole workflow.
surface energy
but maybe we can do some intelligent stuff to fix that where it will release the ISOs it can or something.
I dunno.
would be great to get something like this in our own repo to test the ISOs as well: https://github.com/JasonN3/build-container-installer/blob/f4fd87855a472f41f80ae8e2ed8e78807925761d/.github/workflows/build-and-test.yml#L135-L260
not sure how it would work with systems that have a DE.
still 700MB under windows 11 iso size even with the offline flatpaks
great bang for the buck. 😄
yup!
sure. I can add that.
ryan sipes is a big bluefin fan so having them featured on the dock has been something I've wanted to do for a long time in bluefin
If it fails, just re-run the failed jobs. There would still be individual jobs within the workflow. If it's failing because of something within the repo, it should have been caught while it was still in main and shouldn't just be getting noticed when it comes to a release
alright, standing by to merge when someone tells me it's ok.
I can handle merging into test 🙂
I haven't opened a PR from test to main yet.
oh
YOLO that thing then. 😄
I guess more so what I'm talking about is occasional failures with kmods and such.
but I suppose that's outside the purview of doing a release anyway.
and rerunning the job would work in that case.
Yup. Just rebuild the kmods and then re-run the failed jobs
It would be nice to automate the rebuilding of the kmods so those failures are less likely to happen
Yeah, I'm hopeful we can solve this in a different way by having our own repos (or mirrors) rather than being completely reliant on RPMFusion and Copr.
you know what is amazing about this whole thing
even when it gets to the flatpaks and goes through them all
it still doesn't update the progress bar. LOL.
Take that up with Anaconda. I can't do anything about that lol
hah yeah
I wonder if that's just because ostree doesn't output progress
Got nothing to show
yeah that's an ancient bug
The flatpaks are installed as a completely separate step. It would have been nice if Anaconda at least showed some progress change between steps it knows about even if it doesn't know the progress within the step
ok so, this time running it through the latest build noel kicked off. yafti didn't crash
the software center whines that you didn't enable third party repos (I thought we turned that off?)
Actually, we probably don't have a way to remove that from the first-run wizard thing in GNOME anyway, can we? Best case it's a no-op for us, right?
You can just not ship the gnome wizard
And use a Kinoite installer
Try a bazzite gnome iso
ok, looking
new DDR5 gets here any minute so hopefully I won't be such a mess trying to help with testing
my connection to cloudflare is not good today, it'll be a bit until I can get this bazzite image
I think it's fine to ship the gnome greeter.
Unless you are saying it will re-enable the fedora flatpak repos, which in that case, nty
@Kyle Gospo @j0rge I am going to be AFK for most of this evening. I may be able to resume work on this late tonight, but it's unlikely. I'm hoping to finish up a lot of this tomorrow in Bluefin.
honestly the ISOs for SCALE are already on the sticks, no need to try to shove it in there, take a SCALE breather
main items to finish:
1. Add Thunderbird and dependencies
2. Create directory with individual files for each flatpak and their associated dependencies
3. Update the workflow to use flatpak dir rather than huge line of flatpaks 🙂
I have the entire week after kubecon off, so I can be VM monkey for a while
yeah, I think hopefully next week will be the week we likely get all the flatpak stuff sorted out.
Flatpak stuff is merged in. As long as everything still works once @Noel changes the workflow to use the directory, I'll publish a release and then his PR to bluefin can reference that release
nod
going through it again, this is legit looking so good
Feel free to play around with the flatpak_dir feature now that it's available.
working on that gnome-software extra window now
oh I see what it is
it's the fedora flatpak repo
so I think once we cut that out this shouldn't happen anymore
So you're going to remove gnome greeter?
no
I mean in gnome-software
when you launch it it prompts you to turn on fedora repos, etc.
Oh
Oh I see.
Yeah, the latest PR disables the Fedora repo by default during installation.
yea, I suspect after the merge this will be fine.
other than that it's pretty much awesome once we get the yafti things in
Best part is, you can try right now! The disabling Fedora repo fix just merged.
I'm not going to switch to the kinoite installer, that just makes me use anaconda to create a user and I'd rather walk into the ocean than make people use that thing any longer than they have to
oh perfect I'll try it now!
do I need to rerun the action?
oh got it, I can run the workflow on the testing branch
Or in a docker container.
building. <c&c: red alert computer voice>
Workflow has a race condition, wouldn't use it yet.
Alright. Signing off. Be back potentially later.
❤️
I'm sitting with my dog in solidarity.
@Noel I think one of the gnome flatpaks needed a runtime bump right? I'm wondering if we can see if we can get all the gnome things on the ISO using the same runtime to save some space
Unfortunately I don't control what runtime the flatpaks use.
oh ok I can work on that then
I could try removing the 44 one and see what happens.
I just saw one of your flatpaks was using it, so I didn't dig too much deeper.
flatpak list --app --app-runtime org.gnome.Platform//44
well documented thankfully!
it's gnome logs
Yeah, I will be converting every app into an individual file today which will hopefully save us some headaches regarding what dependencies go to what.
I won't be so scared to edit it too, inline makes me nervous heh
And if we ever add a flatpak as part of install, it will be full of examples on what to do.
yeah
Hopefully we will try to keep that slim if we can.
I don't plan on changing it
Basic stuff to get the system up, Yafti for everything else.
thunderbird shares pretty much all the dependencies with firefox.
as far as runtime.
I successfully got it to install
@j0rge should I create one file for all the gnome apps or should I create separate files?
I think one file for all apps is fine?
sounds good.
I'd recommend one file per app. That way it's easier to know why a dependency was installed. If you choose to remove an app later, you delete the file and all the dependencies are gone. If you need to upgrade an app, you update the entire list of dependencies at the same time
ok, sgtm
A single file looks cleaner, but it's harder to maintain long term. Separate files are a little messier, but easier to maintain
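For illustration, a per-app file might look something like this (file name and refs here are hypothetical; exact runtime versions will differ):
cat > flatpak_refs/firefox <<'EOF'
app/org.mozilla.firefox/x86_64/stable
runtime/org.freedesktop.Platform/x86_64/23.08
runtime/org.freedesktop.Platform.ffmpeg-full/x86_64/23.08
EOF
One ref per line keeps removals and upgrades easy to review.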
does whitespace get trimmed?
when it parses it?
Whitespace is the separator. Multiple whitespaces shouldn't affect it
OK, so I could have it look something like this and it would parse correctly?
just would be nice to space out the runtimes from the actual app file
cleaner to read that way.
That should work as far as I know of. Simple way to find out is to write the file and then run
make --dry-run FLATPAK_REMOTE_REFS_DIR=${wherever} | grep REFS
and see what refs are listed
running a build with the new folder structure for bluefin.
fingers crossed I got everything in there I needed to 🙂
Nice! Good luck
Thanks! Will be really nice to generate ISOs with the flatpaks already embedded.
Eventually. Unfortunately I just cleaned up my local branches this morning and the code to find the ostree refs from the flatpak repo was in there. I don't know if I can still restore the branch on GH though
RIP
@Skynet did you do a merge recently? Looks like I found a bug?
I can send you my flatpak directory if that helps.
I already know what that error is. I'm not sure why it didn't get caught in the automated testing though
maybe this has to do with me using multiple files?
or something different?
I know your test only includes 1 single file in the directory.
It has nothing to do with the file count. I missed a :
weird.
very odd it didn't get caught.
Yeah. Especially since it runs the flatpak code, which pulls in that template and it should have failed immediately when that template was loaded
do you want me to file as an issue or probably a quick fix that can be merged right away right?
It's a quick fix. I also just realized why it doesn't run it. I referenced the path wrong so no files were found so no refs were added
probably needs to include the full path right?
Yeah. I referenced it as just flatpak_refs, but I do need to specify that it is /github/workspace/flatpak_refs
It's building with the fix. As long as it passes and runs the flatpak section, I can merge it in
https://github.com/JasonN3/build-container-installer/actions/runs/8269307175
excellent! You're the bomb! I'll wait for that to merge and test again.
I can test the other bug with 38 now.
I'm also building that into a test too https://github.com/JasonN3/build-container-installer/actions/runs/8269035544
GitHub
Option to use Bootc to install · JasonN3/build-container-installer@...
Creates an ISO for installing a container image as an OS - Option to use Bootc to install · JasonN3/build-container-installer@e50374f
I don't have the fix in that run, but once both PRs (the one for the additional tests and the one for your bug) are merged, it will get tested in each run
OK, so you don't want me to test?
Still test it. The fix is a separate PR. I'm just saying once the 2 PRs I referenced get fixed, hopefully you don't run into that issue any more
Got it
testing now.
so the main benefit of this is not using rpm-ostree?
support for images like centos-bootc and such?
It will be an option to use bootc to install.
rpm-ostree requires the container image to contain an OSTree. bootc is supposed to not require that, and you can just use a container built with dnf
I see, would bootc still install the container even if the image had an ostree? or would we want to disable bootc installation once the feature is merged?
Currently I just have it defaulting to being used right now for testing, but it's going to be disabled by default and considered an experimental feature. Until Anaconda replaces ostree with bootc, which I don't know if that will ever happen, I'm not going to force anyone to use bootc
excellent.
I also found the code for the flatpak refs and I was able to restore the branch. That will shorten what I need to re-write. I'm also going to see if I can figure out why Fedora copies the flatpak repo to a separate ostree repo. If I can just copy the flatpak repo onto the ISO, that would be easier
one more question. I feel like the same steps are being run twice in the build of the ISO now?
And again:
it's running the download of the RPMS 2 times.
not sure if there is logic missing to make sure it uses the correct templates to build the first time or something?
I don't control that. It's whatever Lorax does. There are at least 2 runs for package installs. There's one from runtime-install.tmpl and another from the main template, whatever that one was called
and again:
look at the timestamps.
right I get that, it's running the runtime-install.tmpl twice
Is that from the same run? Are you building boot.iso twice?
same run.
command I used:
docker run --rm --privileged --pull always --volume .:/github/workspace/build ghcr.io/jasonn3/build-container-installer:pr-69 VERSION=38 IMAGE_NAME=bluefin IMAGE_REPO=ghcr.io/ublue-os IMAGE_TAG=gts VARIANT=Silverblue
also chmod will 777 everything in your volume directory on the host that you are running from when running the docker container.
separate gripe, not a big deal.
I didn't notice that when you added back in the chmod line, you added it wrong. It should have been ugo=rwX
https://github.com/JasonN3/build-container-installer/commit/54d89036fb481af6a26106e1fc760a7ace93bfd3
I'll fix that in one of these PRs
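For anyone following along, the difference in shell terms (a generic illustration, not the repo's exact code):
mkdir -p demo && touch demo/file.iso
chmod -R ugo=rwX demo    # dirs become rwx; plain files only rw (X adds execute only to dirs and already-executable files)
chmod -R 777 demo        # everything becomes rwxrwxrwx, including regular files
So ugo=rwX opens up the directories while keeping regular files non-executable.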
Can you send me the full output? It looks like it's trying to run boot.iso twice
yeah, lemme grab it. I'll need to create a zip. Discord has a character limit
there you are!
Thank you. I also just looked at one of my action runs. It is building it twice
weird.
I'm guessing it happened after you added the logic to support F40?
that would be my speculation.
Also testing a 38 image now.
no error on install which is a good sign.
yup, bug is gone with that PR.
Would you like me to submit an issue for this?
Let me see if the commit I just did fixes it
in which PR?
I had some lines in the Makefile I didn't need and an extra space right before calling boot.iso, so I'm seeing if cleaning it up helps
in the fix for the template
oh I see
Sounds good.
Not building boot.iso twice might speed up the build process some too lol
I would certainly hope so! lol
I don't think it building boot.iso twice has always been there.
I don't think so either. I feel like we would have noticed it sooner if it was
I used to look at the logs much closer before I started focusing more on automated tests
for sure!
F40 is the furthest along in your latest build.
hopefully the spacing and cleanup will fix it.
Yeah, that's the one I'm already watching
Still duplicated
The weird thing is that make will check if the file exists before it runs the target
Go ahead and submit an issue. I'll let these tests pass so I can merge in the template fix and then I'll start another branch to fix the duplicate lorax run
Sounds good.
submitted
I think I just realized what it is. I have the target lorax_repo set as a phony, which means it always has to run if it gets called. Since boot.iso has it as a dependency, that is forcing lorax_repo to always be considered newer than boot.iso, and so boot.iso has to be rebuilt. I just need to have it write a file and remove it as a phony and it should be fine. I'll still do that in a separate PR
Awesome! Glad you have a theory at least 😄
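A minimal repro of that theory, with hypothetical targets (Make recipes must be tab-indented):
cat > Makefile <<'EOF'
.PHONY: lorax_repo
lorax_repo:
	@echo "configuring lorax repo"

boot.iso: lorax_repo
	@echo "running lorax (expensive)"
	@touch boot.iso
EOF
make boot.iso   # runs lorax
make boot.iso   # runs it again: a phony prerequisite is always considered out of date
The fix described above would have lorax_repo touch a stamp file and drop the .PHONY, so make can compare timestamps.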
@Skynet I don't see the flatpaks in this build: https://github.com/JasonN3/build-container-installer/actions/runs/8269779871/job/22625934033?pr=72
Could be missing something though.
The other fixes you did in the PR appear to be working. I'm testing building Bluefin with flatpaks now from that PR container image using a folder
@Skynet Interesting. Looks like our public key is not working properly for flathub?
I can't remember where that public key is saved to check to see if it's in the VM or not.
It's in the flatpak repo
Okay I'll take a look when I get home
Firefox install just fine: https://github.com/JasonN3/build-container-installer/actions/runs/8270920049/job/22630151522
Either the repo on your image is in a different place or you have an issue with your run. If it's somewhere else, just submit an issue with the new path. I'm going to rewrite https://github.com/JasonN3/build-container-installer/blob/main/lorax_templates/scripts/post/flatpak_configure to use a for loop if there's yet another repo location
GitHub
flatpak_set_repo fails to load · JasonN3/build-container-installer@...
Creates an ISO for installing a container image as an OS - flatpak_set_repo fails to load · JasonN3/build-container-installer@6ec230f
GitHub
build-container-installer/lorax_templates/scripts/post/flatpak_conf...
Creates an ISO for installing a container image as an OS - JasonN3/build-container-installer
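The for-loop idea would presumably look something like this (only /var/lib/flatpak/repo is confirmed in this thread; the other candidate path is a placeholder):
for repo in /var/lib/flatpak/repo /ostree/repo; do
  if [ -d "$repo" ]; then
    FLATPAK_REPO="$repo"
    break
  fi
done
echo "using flatpak repo at ${FLATPAK_REPO:?no flatpak repo found}"
That way a third location just means adding one path to the list instead of another copy of the script.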
The installation worked (and updates have in the past). Maybe something got goofy with the multiple runs of doing the build. I'll take a look, still haven't gotten home yet.
Gonna guess sometime early next week? I am saying this a bit hesitantly (mostly due to my own GTK app dev experience).
heh
The structure of the code lends itself to plugins; with a little love it could be a DSL.
I'm triggered
yafti == yaml ;p
There is no key. I'm thinking there is a regression somewhere with flatpak updates configuration @Skynet
What shows up when you do rpm-ostree status? I did adjust it so the newer post-scripts directory is used starting with 39 but I wonder if I need to push that back to 40
I think I found the problem maybe?
oh, that's broken too:
Alright, so it's in 39 but apparently not used yet. That's an easy fix
I'll get it in the morning
No worries! I'll file an issue for it? Should I just say post scripts not running for the title?
also the GPG key is not copying over. Maybe line 111 needs to exist in the if statement for flatpak_dirs?
https://github.com/JasonN3/build-container-installer/blob/4ae7d1893ca91859b7169c265d5759c8b4d04171/Makefile#L105-L118
Not running in 39. If you can, test it with a 40 installer to see if 40 is using the directory or if there's a setting missing. I was trying to remove the duplicate runs of the post scripts
OK, I can test using version 40 and see if there is any difference there.
Line 140 and 143. Flatpaks are enabled (the lines you referenced). The issue is the post script isn't called because anaconda changed how they're stored and I started using the new method too soon
The post-scripts directory is apparently on the documentation going back to at least 37. However, the files have to end in ks, which I had changed to indicate they're scripts. I changed it back to ks in the PR I'm working on. I'll test a local deployment today to make sure it took
I got the vm tests to work finally. That took a lot longer than it really should have. As long as the latest run works, I'll merge that in which will fix the post-scripts. Now that the tests are working, I can also go back and add in the rest of the tests to ensure that everything stays working as features are added and bugs removed
@Noel merged in. Let me know if you run into any other issues
so fully offline installation is broken if you try to install certain flatpaks. Looks like if a flatpak depends on h264, it needs to be able to reach out online during installation. I confirmed that when I attached a NIC, the installation worked from the same deploy.iso. @j0rge this is probably the error you got at one point when doing your installation.
do we use this one? I've never paid attention to the cisco thing
I'll take a look a bit closer. Just want to see if the issues @Skynet fixed yesterday are working.
I assume Fedora gets around this by installing that specific dependency later if at all
It's h264 so we can make that the single use of our existing installer
or maybe we can "move" it to the yafti part.
I think that's important enough to be 100% automatic
That's how Firefox rpm does it too
oh ok
That looks like something that is downloaded as part of a script built into the flatpak installer. I don't know if there's any way around that other than excluding it from the flatpak list
Firefox doesn't have a hard requirement for that flatpak. I was able to exclude it from my dependency list and it installed Firefox during the installation. It did have internet during the process, so I can't guarantee it didn't do something I didn't plan
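One low-tech way to keep openh264 out of an offline ISO would be to filter it from the generated refs list (file names here are illustrative) and install the codec post-boot, e.g. from yafti:
grep -v 'org\.freedesktop\.Platform\.openh264' flatpak_refs/firefox > flatpak_refs/firefox.offline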
@Skynet rpm-ostree stuff is set properly, but it is still not copying over the GPG key for flathub
oh nevermind.
the key is there.
That's good. Was about to say that my deployment shows it there
but still getting:
@Noel I can jump in the office hours chat if you want to try to debug your instance
I can debug in a while. Just went out to grab coffee.
Yeah, I'm at SCALE, so hotel wifi has been challenging 🙂
@Skynet free now for a bit if you are
I have a few minutes
@Noel ^
@Noel I can hear you
can you hear me?
Yes
@Noel I can still hear you. Can you hear me?
@Skynet @bsherman response from BCL from Lorax about mknod: https://github.com/weldr/lorax/issues/1383
GitHub
Issues with Creating ISOs with Lorax inside a privileged Docker Con...
Hi There! I am currently using Fedora Kinoite 39 and I have been having issues running a docker container that was created that has Lorax. The problem for us is documented here: ublue-os/isogenerat...
gpg key is an empty file
that's the problem 🙂
I wonder if something is messed up here?
https://github.com/JasonN3/build-container-installer/blob/main/lorax_templates/scripts/post/flatpak_configure
Adding the gpg key manually fixed it.
need to figure out why it's not being copied correctly.
filed an issue @Skynet
Which path for the gpg key?
/var/lib/flatpak/repo
It is where our flatpak system service sets it up, and where Silverblue and Kinoite like it to be.
When I overwrote the key with not an empty file, it started working.
As far as tests, you could add the upstream Silverblue image from Timothy to test that different repo location.
I feel like it worked in silverblue at one point; I wonder if the template reorg is when that bug got introduced.
It's weird that it's empty.
All fixed now with installing flatpaks / building my PoC image/ISO. What needs to be set in secure_boot_key_url (https://github.com/JasonN3/build-container-installer?tab=readme-ov-file#customizing) to support secure boot? (Never needed to look into this part of the boot process before.) After fixing that setting and rebuilding, would I need to use UEFI booting to test it? TIA
GitHub
GitHub - JasonN3/build-container-installer: Creates an ISO for inst...
Creates an ISO for installing a container image as an OS - JasonN3/build-container-installer
You will need to create a key and make that available in your Github repo.
When I can sit down, I can give you more info on how we do it.
Ahh .. .okay
Also really depends if your image has custom kmods or a custom kernel.
Likely your image will work with secure boot without having to install a key.
I’m on an Intel Mac trying to test @heyste images. My keyboard, trackpad, and wifi do not work yet. We figured might be a secure boot issue.
or that it was pointing to a local registry and the Mac wasn't able to locate it anymore 😞
new image I have is now on a public registry and rpm-ostree status shows it is signed
signed just means that rpm-ostree will expect the image to be signed using cosign. You just need to generate a key, add the public key to /etc/containers/policy.json, and then sign your images with the private key when uploading to the registry
The readme (https://github.com/JasonN3/build-container-installer) is a little light on the details, but it just needs the public key (DER format) that signed the kernel modules that will be running. As long as you aren't using any custom modules, the default key built in should be fine
GitHub
GitHub - JasonN3/build-container-installer: Creates an ISO for inst...
Creates an ISO for installing a container image as an OS - JasonN3/build-container-installer
@Skynet I think would be worth exchanging notes with the guys over at ultramarine linux. They are running into the same roadblocks as we have been with anaconda and ostree: https://github.com/Ultramarine-Linux/ostree-config
met them at SCALE. They are doing some pretty cool things.
Also this: https://github.com/FyraLabs/katsu
You're welcome to point them towards my repo. I'm fine with as many people using it as they want. I also figured out a way to get PRs from external repos to work, so soon anyone can contribute to the repo without having collaborator access
very cool!
I'm hoping to do more work on bluefin conversion tomorrow. Currently on a plane flying home.
I'm hoping to have the new workflow in tomorrow too. Basically, when I add a comment with a run command in it, the workflow from main gets called and then it clones the branch for the PR and uses the content for the container and ISO. It means changes to the workflow won't be tested in PRs, but almost everything is in the Makefile, so the workflow shouldn't need to change much. Everything else will run like normal but it will have the necessary permissions in GitHub
It looks complicated in the UI, but it's not too bad in text and it does work
yo I live for this shit, that is so clean
nice work
looks awesome!
@Skynet @j0rge @Kyle Gospo Good news is I've made great progress combining the workflow for building the iso and building the container. I think I want a bit more intelligence than what it currently has. Is that something you may be able to help me out with @Skynet? I definitely know I can reuse a lot of the code you've written already, so that will help a ton, but it would be nice to just do a pair programming session on it.
@Kyle Gospo @j0rge The more I work on the build iso workflow, the more I realize the benefit of using a unified container build action for both Bazzite and Bluefin. Having to replicate what I'm doing in the ISO build process twice makes little sense. It would be better to have a unified build action and have options in that unified build-action if there are places that Bazzite and Bluefin diverge.
sgtm, should we just make it a repo/action?
just keep the summit timeline in the back of your head pls. 😄
it would be an action, yes. Ideally it would be one action for building the container and one action for building the ISO, and they could be tied together very easily. Stretch goal would be to add automated testing where it would attempt to boot, install, and run tests on the VM created from the installation ISO
amazing, yeah, that sounds baller!
${{ github.event.issue.number && format('pr-{0}', github.event.issue.number) || github.ref_name }}
finally met Mo!
About to head over to the rh booth, saw dan walsh too
Why an action vs a reusable workflow?
If I remember right, an action can be used as a step while a reusable workflow can only be used as an entire job
fair, i may have asked the wrong question 🙂
i think the question i meant to ask is "what does this mean?" 🙂
I think the way I originally read it was probably incorrect, sounded like replacing the whole bluefin build workflow with a single action.
But now looking at the PR, I now think it means switching from an ublue-os/isogenerator action to a jasonn3/build-container-installer action...
I'm still curious what is different here though.
take a look at this one: https://github.com/ublue-os/bluefin/pull/1022
haven't merged it into testing yet.
benefit of this PR is it will do an end to end test when the containerfile is changed and will build an ISO you can download for it.
Your concern about leaking secrets through PRs is valid, but we already have protection because we need an approver or member to run the workflow, so no secrets should get leaked through a PR so long as the PR is reviewed properly.
using PR based containers is going to make testing SO much easier.
i'm not the one who pushed back hard on that security concern in the past, but i have relayed the concern
sounds good. I'll get consensus on that point before we move forward with it.
I think building a container and ISO per PR is a very valid way to do end to end testing.
yeah, that's a big point
i've personally been shut down a few times over that point
I don't see the security concern unless someone does something naughty with the workflow.
and we have protections in place for that.
we can also do stuff with bot commands as well which @Skynet is doing on his repo. But I want to get a POC in testing going first before I boil the ocean anymore than I already have.
This PR is going to be massive X_X
the value of the bot commands is we can run portions of the workflow tests without having to run the ENTIRE thing every time.
before merging we will of course run the whole thing, but for example, if I'm not making changes to the container file, I only need to build the container for the PR ONCE.
also another thing I'm targeting with this PR is ISOs will only be uploaded to cloudflare when doing a release.
anytime there is a PR, they will be uploaded as an artifact that you can use for manual testing or automated testing (which I still haven't implemented) 🙂
at some point, I would eventually like to use this build-action for main and Bazzite. I'm targeting Bluefin first as a POC.
GitHub
[Feature request]: JSON command output format · Issue #5499 · flatp...
Checklist I agree to follow the Code of Conduct that this project adheres to. I have searched the issue tracker for a feature request that matches the one I want to file, without success. Suggestio...
I found this.
It would be lovely if the flatpak list command output as JSON
more flatpak bs: https://github.com/flatpak/flatpak/issues/4107
When running in a container, sometimes it will install apps and sometimes it will not.
@Skynet @Kyle Gospo generating all dependencies is not something that can happen unfortunately. Containers and chroots don't work well with flatpak. Main thing is sometimes it will successfully install all the dependencies, sometimes it will not; it's a roll of the dice depending on the app. h264 specifically messes up some apps. I think the best we could do is dynamically generate the direct runtime dependencies by using flatpak remote-info flathub org.mozilla.firefox. It will also provide you the ref. All other dependencies would require you to connect to the internet and run a flatpak update. I think this is unfortunately the best we can do.
I wanted to do some scripting of Flatpak a while ago, but couldn't find a clean way to read the CLI outputs, so gave up.
yeah, it's pretty crappy.
I wasn't planning on pulling the output from the commands to try to resolve that. I was planning on spinning up the container, running flatpak install, letting flatpak decide what to install, and then querying the repo using ostree. The issue with h264 is that it downloads the archive from cisco during install, which is an issue if you are completely offline. If any other flatpaks do the same, that could be an issue too, but again, only if you're completely offline
Flatpak install has not worked well inside of Bluefin or Bazzite for installing flatpak packages (even with setting up the temp directories). I was connected to the Internet and h264 did not want to install in my container I spun up. Which would mean it wouldn't show up in the OStree, right?
which version of bluefin/bazzite and which flatpak?
Latest for both.
org.mozilla.firefox will fail to install h264 in Bluefin; Bazzite for me got weird temp directory issues during flatpak installation, even though it was making the temp directories just fine.
Todoist like in this issue will fail completely if h264 doesn't install and won't skip past it like Firefox does: https://github.com/flatpak/flatpak/issues/4107
If there was an easy way to extract the information before installation, that would be super helpful in this case; closest I've found is the flatpak remote-info command, which gives you the direct runtime the app relies on to function at all.
And all that Lorax and anaconda requires.
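As a concrete example (standard flatpak CLI; output formatting varies by version):
flatpak remote-info --system flathub org.mozilla.firefox
prints the app's ref plus a Runtime: line, which is the one direct dependency you can reliably extract this way.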
The frustrating thing with dependency management is gnome just dropped 46 and a lot of apps switched over to the new runtime, but not all of them.
So even managing that one dependency manually was a pain. And the best part is that it didn't flag in Lorax so I had no way of knowing until I attempted a test install.
Is h264 installed? Sorry, on my phone.
h264 errored, but it's still listed. It can't be put in an offline installer anyway since it needs the internet during the install process
Listed in the OStree object you are querying?
flathub:runtime/org.freedesktop.Platform.openh264/x86_64/2.2.0
It doesn't show up in a flatpak list command if it fails.
Again, I'm not using flatpak list
Not that we are shipping todoist, but could you try that as well? That one in my experience would fail to install completely.
com.todoist.Todoist
are you using a bazzite or bluefin container for this?
bluefin
OK, I'd be curious your experience on ghcr.io/ublue-os/bazzite:latest
Does the installation need to succeed at all? or does it just need to attempt it to have it show up in refs?
ok, it just needs to attempt the install, it doesn't need to succeed
as long as you pull from the flathub: list
Todoist fails for the same reason h264 fails, it's trying to run a program as part of the install, which doesn't work in a container because then it becomes a nested container
right, I guess I don't care if it fails to install, I just need it to provide the dependency list.
I was wondering if that would happen even on a failed install
gonna check bazzite and see if the weird issues I get with temp directories matters.
this at least gives me a chance of moving forward.
error I get in Bazzite:
You might need to create some directory ahead of time temporarily. Enable verbose and try to diagnose
I think what I'm going to do is work around this by using the base containers Bluefin and Bazzite build from.
yeah, base kinoite container works. It does not like something about the bazzite container.
OK, so a base-main container pulls up the same deps even for firefox. I assume we can use any container that has the flatpak command already in it to do this
@Skynet I'm losing my mind. When I run manually in a shell in the container the commands in my entry point, it works with no issue. When I run them as part of a bash script using the entrypoint, ostree says it's invalid. I have no idea what I am doing wrong..
Containerfile
entrypoint.sh
Podman command I am running
I'm getting:
error: Listing refs: Invalid refspec repo=/tmp/tmp.46ZPLLnN6Q/repo
to be clear: if you manually enter a base-main container and run the flatpak-deps.sh script, it works or no? I assume running the entrypoint via a container built from that Containerfile is failing
the script does not work.
but in the container the commands work
like it gives me the error listed above.
yes.
i'm looking at that
maybe it's the quotes around the $@ that is screwing things up?
hah, i get a different error
what error is that?
--repo not repo
oh man...
I missed --
oh, i need some args, like an app ref
yeah
I figured it out.
well
it's working when I run the script directly, after I added -- to repo per @Skynet
rather, @Skynet figured it out, and I'm testing XD
this is why I'm not allowed to code 😛
The easy way to notice it: in Listing refs: Invalid refspec repo=/tmp/tmp.46ZPLLnN6Q/repo, the repo=/tmp/tmp.46ZPLLnN6Q/repo part is not supposed to be a refspec. It's supposed to be a repo. It means something is wrong with that area of the command
cool
I got the parsing figured out.
@Skynet So basically I just need to publish this container as an action or we could integrate it in build-container-installer
It can be built into the action. A step just needs to be added before the current docker run step that will take in the list of desired flatpaks and then add in the dependencies, which can be done through the files
Will probably want to include the option of what container to use, but I don't think it matters. Except for Bazzite. Still have no idea why that container won't even entertain installing flatpaks.
It should use whatever is going to be deployed
I need to figure out why Bazzite is broken then. I cannot make heads or tails as to why it's not working.
So for the action, it would be like this: https://github.com/JasonN3/build-container-installer/blob/flatpak-deps/action.yml. The lorax template can then be modified to skip downloading the flatpaks and instead can just get the existing refs using the code from https://github.com/JasonN3/build-container-installer/blob/ddf7551727c0d4015f6b428f7ce93bda83da538d/lorax_templates/download_flatpaks.tmpl.in
GitHub
build-container-installer/lorax_templates/download_flatpaks.tmpl.in...
Creates an ISO for installing a container image as an OS - JasonN3/build-container-installer
are you free to chat for a bit?
ostree refs --repo=/var/lib/flatpak/repo | grep -E '^(flathub:)' | awk -F'flathub:' '{print $2}' | grep -E '^(app/|runtime/)'
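Putting the whole trick together, the dependency-generation step might look roughly like this (image name and flatpak choice are illustrative):
podman run --rm --privileged ghcr.io/ublue-os/base-main:latest bash -c '
  mkdir -p /var/tmp && chmod -R 1777 /var/tmp            # some images ship without /var/tmp
  flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo
  flatpak install --system -y flathub org.mozilla.firefox || true   # a failed install can still record refs
  ostree refs --repo=/var/lib/flatpak/repo
' | grep -E '^flathub:' | awk -F'flathub:' '{print $2}' | grep -E '^(app/|runtime/)'
The || true matters because, per the above, the install doesn't need to succeed for the refs to land in the repo.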
@Kyle Gospo Need to fix a couple things to make flatpak dependencies work properly in Bazzite. the /var/tmp folder is completely gone in Bazzite which is an issue because flatpak needs that directory to install flatpaks to generate the dependency list.
we may just want to do away with some of the cleanup scripts at the end and let ostree commit handle that?
sure
sounds good. I'll put together a PR after I do some testing on a local image
do away with cleanup scripts
maybe... let's talk about what that means 🙂 specifically, where is /var/tmp gone?
ostree commit will nuke everything in /var (including /var/tmp), but at least one time it would complain if stuff was left in /var for it to clean up...
so, especially in staged builds, or in builds on top of other images which have been "cleaned", some /var directories have to be recreated... thus some of the tricks like mkdir -p /var/lib/alternatives
it's not really to preserve that path in a final image, but to ensure rpm installations don't completely fail
if you pull down bazzite and look inside the container, /var/tmp is completely missing.
i know
that's how ostree container commit would leave it
and in our *-main images, the last steps done in the containerfile are
i'm not sure why bazzite is doing ostree container commit after those mkdirs
I guess what I'm getting at is /var/tmp exists in bluefin and not bazzite. This will be an issue when processing flatpak dependencies.
"technically" my way (the main way and bluefin) is suppsoed to be "wrong" because ostree WANTS /var to be empty
its supposed to be created on boot with tmpfiles.d and stuff
i think we just need to reorder the final commands in the bazzite Containerfile
I see, so a better solution for us would be to create the file we need with the permissions we need?
we can easily do that in the action.
it's an ephemeral container for grabbing the flatpak dependencies.
it gets blown away immediately when it's done.
yeah, i don't think this flatpak thing should require /var/tmp to pre-exist
it's running as root, so it should protect itself
mkdir -p /var/tmp && chmod -R 1777 /var/tmp
It needs to be there or it fails.
can you not create it?
We can.
isn't that what I just said?
I guess I got confused by this statement up here saying flatpak thing doesn't need it to pre-exist. It does or it needs to create it 😛
restated a bit differently...
whether using bluefin (with /var/tmp) or bazzite (without /var/tmp) as a builder image
i would expect the script/tooling you are working with, since it already is doing some scripted commands, to protect itself by guaranteeing that /var/tmp exists, regardless of the original state of the builder image
it can do this with: mkdir -p /var/tmp && chmod -R 1777 /var/tmp
sounds good
@Skynet I couldn't tell you why root is allowed to install flatpaks in an entry point, but not in a run command for docker.. I set it to system and it is installing properly now X_X
https://github.com/flatpak/flatpak/issues/3238
@Skynet
Found this in the Bazzite container:
I'm going to try removing these and install and see if it fails
so it may just be installed outside of the flatpak lol
actually, I'll just check bluefin.
flatpak by nature can't use anything in /usr
that shouldn't matter
yeah, that exists in both...
why does Bazzite work @Kyle Gospo ? 😛
sorry, for more context: installing flatpaks related to openh264 in the bazzite container gives no error; in bluefin and all other images, it does.
no clue on that one, but we already know openh264 downloads stuff from cisco or whatever
and should be installed after the fact
so imo let's not even investigate this and just assume it must always be done that way
right, I understand that. The issue is some packages fail because of this issue and some bypass it like firefox does.
what's failing?
GitHub
Failures in chroot · Issue #135 · containers/bubblewrap
In our Endless image builder, we chroot into the ostree deployment to install apps with flatpak. The triggers always fail for 2 reasons: The slave mounting of / fails because the deployment directo...
Well, this is going to run for however long it runs. Hopefully I can login to my computer when it's done
The rebalance fixed my computer. Now to try to clean up some space so it doesn't happen soon again
It would be good to get a proper code review on this one before we merge into main bluefin.
but this should have all the changes that will be getting put in there.
Also when reviewing we'll need to think of yafti in the back of your head. 😄
yes, we need the fix on Yafti for system flatpak installation instead of user since I delete creating the user based repo
that PR should probably be separate.
"Kyle will know what to do"
also Surface images are broken.
unless he's busy changing the blue tones on the website for the 12th time today.
yeah, they've been busted a while
I feel called out
it's only 10 so far ty very much
hahahaha
I thought you said you wanted to get out of web dev 😉
alright, let's finish this thing
agreed. We need to fix surface images 🙂
https://github.com/ublue-os/bluefin/blob/main/usr/libexec/ublue-user-flatpak-manager
https://github.com/ublue-os/bluefin/blob/main/usr/libexec/ublue-system-flatpak-manager
we can't fix those until bsherman fixes HWE
these two can just be updates right?
oh dang, I didn't check the ostree remotes on the last test
I wasn't going to add the --user because we don't use it.
I did, it's good.
yeah I think we just blow that away right?
@Kyle Gospo channel your inner bsherman and clean HOUSE.
we could. I'd enable the system-level one and just replace all the code with flatpak update --system -y
right
any reason we're dropping user flatpak repos?
pretty sure fedora ships w/ both user and system
do we need them?
suppose not
I think it confuses users personally.
like if someone knows they want them they know how right?
that double dropdown in gnome-software sucks too
IMO blow it all away
and then I assume I no longer need to install flathub repo?
The ISO will just do that for me?
Yes.
cool, though we probably need to keep it
ISO will set up the flathub repo by default.
since an existing user will update and it'll be removed from their disk
because it's no longer on the image
just means we don't have to mess w/ it after the fact
true, let's try to not break existing people
oh.
good point.
actually.
it shouldn't.
worth testing
there's no way to remove that dumb "enable third party repos" question in the first run wizard is there?
I think it's a noop either way but it is kinda confusing, though benign
not that I'm aware of.
yeah, actually the repo is set in /var/lib/flatpak
I can set the script to add the --user repo I guess.
though, did bluefin break it for everyone anyway when we disabled the user manager service?
I mean, I don't care, I'd rather nuke that entire user thing from orbit
we didn't migrate people we just set it up so new installs were system
I can add these 2 lines.
yeah but that adds it back?
we don't want that we just don't want to blow --user away if it already exists.
right, for backwards compatibility, we should make it available.
otherwise folks will complain their apps don't update
we can talk about it a bit more, I need to take a break and eat something.
actually.
wait.
this is /etc.
it won't break it for people who already have it enabled.
right
wait.
it's just for new installs
I need to test.
there is /usr/etc and then there is /etc.
oh wait, kyle said there was shit in var
/usr/etc is immutable.
So here's the thing.
GitHub
bluefin/Containerfile at 29ef97baec8e233fda9c48330cc87a48a070e7c6 ·...
An interpretation of the Ubuntu spirit built on Fedora technology - ublue-os/bluefin
we still have it enabled in main.
let me get the first part done ;p
so it's still creating the --user flatpak remote.
it's just installing nothing.
unfortunately for backwards compatibility we need to include it.
sorry! On the bright side, that fixes yafti 😉
since the remote will be enabled 😛
yeah but we just leave that intact right?
it doesn't need to be on the ISO
I have a fix for it.
just so you could see what yafti is currently running
wait why are you adding it to the system flatpak manager?
it's a workaround.
should probably be renamed.
no need to have multiple services for flatpak stuff.
@Noel Release v1.1.0 has been released
https://github.com/JasonN3/build-container-installer/releases/tag/v1.1.0
GitHub
Release v1.1.0 · JasonN3/build-container-installer
What's Changed
Breaking Changes 💥
Remove need for action_version by @JasonN3 in #27
New Features ✨
Use Skopeo instead of Podman by @JasonN3 in #30
Use bootc when available by @JasonN3 in #33...
I hope to fix those tonight as part of my HWE work
I am gonna hold off on doing anything until we can all chat. There could be some negative impact to users or side effects...
yeah I kinda ran outta steam
one extra day won't kill us
Did not get it done tonight @Noel but progress
actually I have an idea.
@Noel when you have the ISO I'll just remove oldyafti entirely
then when the new one is ready we slot it in
but like, everything important is on the ISO
the only reason we NEED it is to install the stuff that is now in the ISO
so when the ISOs land I'll PR it out.
I mean, I didn't install all the apps that are made available through Yafti.
I think current Yafti is fine for now.
yeah but those are optional
I mean..
I wouldn't consider discord an optional package for me 😛
if you want to wait for the yafti rewrite, that's fine.
@j0rge So unfortunately there is an issue with the release: https://github.com/JasonN3/build-container-installer/issues/89 Once this is sorted out, I can pin the version.
That's already fixed
trying it now.
As you guys work towards having PR container images, here's a workflow to clean up the registry so it doesn't get flooded with old containers that will never be needed again. It will clean up untagged (sha:....), PR containers for non-open PRs (pr-...), and old branch containers (anything not vX.X.X, pr-..., latest, or an open branch)
https://github.com/JasonN3/build-container-installer/blob/main/.github/workflows/clean_repo.yml
@j0rge @Kyle Gospo @Robert (p5) https://github.com/ublue-os/bluefin/pull/1034
Ready for review!
@Skynet Quick question: we don't have tests for testing installing with no flatpaks correct? Noticing some weird issues in this PR when moving to upstream. I think it might be related to the ARCH section, but not really sure: https://github.com/ublue-os/cosmic/pull/21
@Noel Disable flatpak dependencies
we did.
set it to false.
@Noel The fix is in main now. Go ahead and try it there. When you're ready for the PR you're working on to merge, I'll create a release. Same thing as last time
@Robert (p5) I'll amend your PR to test the changes skynet pushed.
@j0rge new ISOs published.
https://github.com/ublue-os/bluefin/actions/runs/8475608370
dangit
overlooked something.
checksum name doesn't match.
OK
@Kyle Gospo fix I need from you on bluefin's site. Could you fix example: bluefin-latest-CHECKSUM to point to bluefin-latest.iso-CHECKSUM?
Yep, can do
thank you!
once that's done, I can delete the old checksum files in R2
Should I make an issue for this @Kyle Gospo? Just want to make sure we don't forget this. Don't want to confuse users 😄
@Robert (p5) @Skynet confirmed running the workflow with @main works. I'll have P5 confirm, but it's probably fine to do a new point release: https://github.com/ublue-os/cosmic/actions/runs/8476138568/job/23225258171
Great! Thanks Jason & Noel!
Will be able to test the ISO in a couple hours.
We've had one report of missing --user flatpaks in bluefin
After someone updates?
@Noel Do you want to try to build an ARM ISO? Or is an x86_64 ISO good enough for Cosmic?
That would be a better question for @Robert (p5)
For now, I think x86_64 is enough, however I know there's some ongoing investigation into how we can build ARM container images, so we may want to do so later on.
Sounds good! @Skynet feel free to cut a minor release with that fix if you would be so kind! 🙂
v1.1.2 has been built
Amazing! Thanks!
Switched Cosmic PR to use the new tag and it's building now
@Kyle Gospo opened an issue: https://github.com/ublue-os/bluefin/issues/1067
@Skynet Something is funky with 1.1.2
1.1.1 works just fine to install flatpaks, but it's broken for 1.1.2 with our configuration using bluefin: https://github.com/ublue-os/bluefin/actions/runs/8492847197/job/23266283329#step:6:379
we reverted for now.
One thing at a time. Are you done with Cosmic?
yes, should be good there.
@Kyle Gospo @j0rge Have ISOs with flatpaks to test for Bazzite: https://github.com/ublue-os/bazzite/actions/runs/8529569094
GitHub
Build ISOs · ublue-os/bazzite@653114d
Bazzite is a custom image built upon Fedora Atomic Desktops that brings the best of Linux gaming to all of your devices - including your favorite handheld. - Build ISOs · ublue-os/bazzite@653114d
whelp, one sec, things seem messed up with the upload names.
I'll grab bazzite-gnome!
hopefully can sort that out in another PR.
nevermind, artifact names are just weird. nothing to do with the ISO names hopefully
I can fix that VERY easily. just a variable I overlooked.
@j0rge I will also do a PR for bluefin to do a multi file upload like I'm doing in Bazzite.
will make the workflow look simpler.
should be real easy.
also I can add torrent support if you'd like
sure
1/3 of the way through the bazzite download
sweet.
if everything looks good, this should land VERY soon 🙂
when you're done I'll cherry pick back into main
thanks for doing this 🙂
for sure! definitely test some ISOs to make sure you are happy with things.
I would check the deck images too. Make sure the extra boot vars are working.
testing a fully offline install of bazzite right now.
the installer does the user creation in anaconda in bazzite right, even in gnome?
correct.
you will need to create a user in anaconda for any edition of bazzite.
got it
It's because I don't ship any of the gnome first start stuff
And if a user should boot to Game mode without creating a user, well GG for that system
Better to just always make them ahead of time
flatpaks installing!
We have liftoff
fully offline installation with secure boot enabled.
It's beautiful
All that is needed for flatpaks when booting:
I'm so pleased this worked.
also approve this PR to fix up the artifact names if you would 🙂 https://github.com/ublue-os/bazzite/pull/939
gonna build a new set of ISOs with the new merged vars.
so, I would say, do some more testing, and then I can give you the commits you will need for main
I believe this is all you will need @Kyle Gospo
maybe the fsync one?
Yeah that one too
Awesome
sweet.
bang on the tires a bit please.
check the deck image and make sure the extra boot vars are there. I gotta dip out for a few hours.
done!
I also didn't test Yafti, but I'm pretty sure it should work as I saw the flathub remote for user in there.
Confirmed the new ISO doesn't have EXTRA_BOOT_PARAMS in the deck images. checking into why.
@Kyle Gospo we are blocked until this is resolved. talking to Jason about it now.
I installed the world with it last night on bazzite. Outside of the known issues it appeared to do what it do.
if you have an updated image i am happy to run tests again.
Not yet. Just waiting on a fix from upstream which will hopefully be coming soon.
Happy to run more tests once that fix gets pushed. Just let me know!
@Skynet something looks a bit goofy at the end of the workflow for Bluefin. It is uploading 244 files for some reason...
https://github.com/ublue-os/bluefin/actions/runs/8565408313/job/23473424088#step:7:12
This line is suspicious (doesn't appear flatpaks are getting installed?): https://github.com/ublue-os/bluefin/actions/runs/8565408313/job/23473424088#step:6:1583
also it downloaded the container twice? https://github.com/ublue-os/bluefin/actions/runs/8565408313/job/23473424088#step:6:1583
Everything looks pretty normal otherwise except the ending portion.
Here is the workflow I'm using for Bluefin: https://github.com/ublue-os/bluefin/blob/testing/.github/workflows/build_iso.yml
There's what you need to fix: https://github.com/ublue-os/bluefin/pull/1090
GitHub
fix: use path by JasonN3 · Pull Request #1090 · ublue-os/bluefin
Use the complete path to the ISO instead of uploading the entire directory that contains the ISO
@Noel Try running the workflow from that PR. Not sure if your workflow can run from a PR, but feel free to merge it if you can't run it from there
Unfortunately I can't run workflows from a PR, but I can go ahead and merge it later today and see if it works.
Thanks for putting in the PR! I wasn't sure on the make_target parameters either, but that is much more clear now as well!
getting this on build: https://github.com/ublue-os/bluefin/actions/runs/8583351116/job/23522456937
something goofy with the makefile perhaps?
I was able to work around it by using our old workaround, with the deps branch.
not sure why there is an error from the code you had before. For now in bluefin, I just commented it out to test out using our old method just to get something to build.
fyi if you're not watching the bluefin channel, m2's in the middle of reorging the repo and enabling aurora btw
Yeah, I saw.
I got some stuff in place with my PR for supporting flatpaks already.
Is he doing everything in one build.yml workflow for the container builds?
yeah we're still rolling with a single build yml
didn't get sherminated yet
Ok. I'll do the same for build ISO.
I likely can just do a matrix that includes bluefin and Aurora.
OK, not quite, mokutil isn't popping when installing with secure boot on. Something got missed there.
what's the tldr for bazzite, are you using the latest version of the iso action?
yes, working towards that.
There's a few issues.
ok, so it makes sense for us to just stand by and then enable the ISOs with the new version and not do it with what we have now and then go through the upgrade?
no worries, we can get the images going and have people test the image itself in the meantime
yeah, I'd rather just put out iso with flatpaks for bazzite when I can have all of our stuff on the same version.
The version bluefin is using for flatpaks doesn't work for bazzite because extra_boot_args is broken in that version.
now it's fixed, but there is a separate issue with secure boot support is not working.
OK, the key isn't getting copied over.
@M2 So couple of thoughts I had:
1. I see you commented out the
build_iso
workflow at the end of your PR. The reason I had that there initially was to build images in the testing branch for bluefin so I could start to finish build a testing image and then build an ISO from that testing image. Ideally, it would be good to keep that workflow if possible, just not sure how we would do that with splitting out the builds
2. Is it possible to PR your branch against testing first so we can test building ISOs using your workflow? That way, our first test isn't uploading directly to Cloudflare and we can test the ISOs that come out the other side
3. Could you change your version to @main instead of 1.1.1? I want to test some fixes. We will get a pinned version once testing has concluded.
oh also, not sure how we wanna do R2, but if it's easier to use universal-blue.org as the shortcut for the ISO URLs so it's consistent between aurora and bluefin, then we can do that. Unless it doesn't matter.
@Noel
1. I had commented it out since our build_iso would build on all successful runs of the reusable build. That would be 5x runs of the iso_build. We could refactor the reusable-build-iso to instead take the same inputs as reusable-build. It would split, but that might be advantageous since we already just have a matrix variable for version right now.
2. I can change the target of the pull request.
3. Yes.
@Noel Please look at changes
https://github.com/ublue-os/bluefin/pull/1103
It's now going into testing
ooof looks like a workflow depth issue.
That's a bummer.
I'll take a look deeper at it tonight.
I'll add some quality of life stuff from bazzite as well.
Well, I rebased onto your testing branch. that was unfun
I'm sorry.
Do not accept that if the commented out thing is important.
Oh you're talking about the make target thing?
Ideally, I would like that, but I would need to work on figuring out why it's broken.
Yeah, we have it in a previous commit. I'm ok with removing it for now.
Now thats a lot of red
Yeah, something with the reusable workflows is goofy in how many levels can be run.
?
Workflows are failing.
In testing.
ah
the name of the fix should be enough to see the big dumb dumb mistake
Lol oh no. I didn't catch that. I'm also on my phone though.
don't merge. Looking through the changelog, it stripped a lot of things
Yeah, the additional things I wanted to do? Use one upload folder, etc. ?
I saw you added that back
Honestly, I'm just putting things that got removed by the merge back.
There was divergence between testing and main (obviously) and I wrote the original against main
Sorry, definitely a very, very messy merge onto testing
No worries. Sorry about that. Been working on this for a bit and then Aurora merger happened in the middle of it lol
I really, really should have done the ISO change on the testing branch
Okay I'm done
Sounds good. I'll take a look tonight.
@M2 I'm going to merge in your changes to testing and see if it builds some Bluefin and Aurora ISOs. Only way to know if it's working is to merge it in 🙂
actually few fixes, standby.
figured it's kinda janky
@M2 some minor papercuts so far, but things are building.
except for Bluefin 38, something is goofy there.
also this showed up in the build for aurora, not sure what's going on there lol
https://github.com/ublue-os/bluefin/actions/runs/8634834779/job/23671765434
how did you do an ISO build combo like that?
gonna try to figure it out lol
38 is building way less images for some reason as well.
asus is 39 only now so those for sure won't be building
I think I found the error.
This section is likely not needed. Must have gotten added in when the merge happened: https://github.com/ublue-os/bluefin/blob/03838015a71d41ccbfff7d6ef168b0ef364f9418/.github/workflows/reusable-build-iso.yml#L51-L55
actually scratch that..
logic got redone.
Aurora 39 and Bluefin 39 built successfully.
albeit with that weird inclusion in the matrix for some reason.
Just need to fix whatever is going on there and issues with the framework images building for 38 and we are good I think.
only a little while longer until both of those leave.
That was from before. We can do that logic as bash variables
That probably would fix the goofiness.
yup, just gotta fix a few minor issues and we should be good.
just tested aurora.
removed the weird include portion. It is now getting GTS vs Latest via input.Fedora_version
I just pushed a commit to fix the same thing lol
directly to the testing branch.
mind doing some pair programming for a bit?
Yepp. I'm just in the lab and can hop on voice
sounds good, in office hours channel
@j0rge as an approver, I should have permissions to see how many builders we have running right?
Not sure, try it, go to the org settings, under actions->runners
https://github.com/organizations/ublue-os/settings/actions/hosted-runners
I can't see org settings
probably why
there used to be a service I used that connected to github and made all sorts of builder stats and stuff but they went out of business
Like I would love to slurp out of here so we have the # of builders, etc. in discord, like print out the # of builders when something gets queued up
also I like how Fyra splits their github channels, one for changes, one that is builds only, that's nice.
@M2 github is back up.
also, I'll do a fix on testing
heh, each run is exactly 59 builders
https://github.com/ublue-os/bluefin/pull/1116/files
woohoo! https://github.com/JasonN3/build-container-installer/releases/tag/v1.2.0
Once the container builds, I will be pinning to that release shortly! 🙂
release has been pinned.
@Kyle Gospo if you have some time this evening, we can get everything merged in for building ISOs with flatpaks. So long as the tests pass of course.
Gonna do a handful of tests on the pinned releases.
If anyone wants to help with ISO testing: (workflows are almost done)
Bluefin 38 (Testing): https://github.com/ublue-os/bluefin/actions/runs/8651017131
Bluefin 39 (Testing): https://github.com/ublue-os/bluefin/actions/runs/8651017136
Aurora 39 (Testing): https://github.com/ublue-os/bluefin/actions/runs/8651017134
Bazzite (Testing): https://github.com/ublue-os/bazzite/actions/runs/8650996640
Main things to look out for:
1. Does it prompt mokutil when installing with secure boot?
2. Are extra boot options showing up correctly in the bazzite-deck images?
3. Are flatpaks installing properly (and remotes properly set up)?
4. Can you run a flatpak update without issue?
5. Is rpm-ostree pointing to correct upstream?
6. Can you do an offline install?
I would be also interested in
flatpak remotes
output
it should be just system-wide flathub
without the Fedora repo, right?
yes.
no fedora remote should exist.
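for anyone testing, a quick way to check (the expected output is a rough sketch, not verbatim):
# list configured remotes; there should be exactly one system-wide flathub remote and no fedora remote
flatpak remotes --show-details
# expected, roughly: flathub  Flathub  https://dl.flathub.org/repo/  system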
I'll test Aurora
Using GNOME Boxes w/ UEFI -- let's see if it emulates secureboot
Virt-Manager does have secure boot support which is how I've been testing secure boot.
unsure if gnome-boxes has signed firmware
Is rpm-ostree pointing to correct upstream?
signed!
yes. it should be the signed image, not unverified.
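quick check for that one too (the image ref here is illustrative):
# the booted deployment should use a signed image transport, not ostree-unverified-registry
rpm-ostree status
# expected, roughly: ostree-image-signed:docker://ghcr.io/ublue-os/aurora:latest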
github is pulling a funny with bazzite. Gonna just rerun it...
offline install seems to work surprisingly well
It's using /run/install/repo instead of the ghcr
enrolling mok worked
as part of install?
yes
excellent.
I had to type ublue-os though
It's not painfully obvious that is the password
yeah, unfortunately we have to point users at the documentation until we can get our keys added.
Our secure boot key is never going to be added
They were talking about the signing key for the oci images
It should be possible to pass the password into the mok enrollment, we did in the old ISO
oh yeah, I got those 2 confused. sorry.
does the script just need to be adjusted?
If you look at the old ISO it quite literally just passed it in as a string
I would just make sure our script is doing the exact same thing, that's all there should be to it
If we're already doing that then I wouldn't spend extra effort trying to figure it out
Trivial to add some additional in your face documentation about this
I mean, we yanked the exact script.
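for reference, the old script boiled down to something like this; it assumes mokutil accepts the password on stdin when piped, which is what that script relied on:
# hedged sketch: feed the fixed password twice (password + confirmation) so the user never types ublue-os
printf 'ublue-os\nublue-os\n' | mokutil --import /etc/pki/akmods/certs/akmods-ublue.der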
flathub is the only remote listed. Looks good.
Why do we not use the fedora flatpaks? Too opinionated?
less software availability.
Well, yes, but I suppose having both enabled wouldn't be a concern?
It's confusing to the user.
Fair enough
flatpak update works fine 🙂 rpm-ostree is pointing at the signed image.
All looks good 🙂
Why no 40 ISOs?
It looks like it would be easy to add using matrix
we aren't publishing them until they are ready.
jobs:
  build_isos:
    name: Build Aurora ${{ matrix.fedora_version }}
    uses: ./.github/workflows/reusable-build-iso.yml
    secrets: inherit
    with:
      brand_name: aurora
      fedora_version: ${{ matrix.fedora_version }}
    strategy:
      matrix:
        fedora_version: [39, 40]
I thought storage was free on GitHub 😉
further down we are uploading to cloudflare.
Because of faster repos?
As an end user I do not mind waiting as long as it is saving the project money
Cloudflare also offers grants to open source projects
so maybe they'll host for free
we'll test 40 ISOs when 40 is out
https://github.com/ublue-os/bluefin/pull/1122
Threw this together real quick for when we want to enable it.
Maybe a little too eager on that one -- I love this stuff and new versions get me excited to get them building and try it out.
Obviously we can just rebase from 39 though - that is what I did
right, remember though, when you add a version you double the images
which is why we concentrate on completing the entire lifecycle loop first, then you blow the matrix up
like let's say you typoed something on a PR, now you have to go cancel all those builds by hand
We do not pay storage per GB or anything in GitHub do we?
Since it is a public repo that should be free
no, we pay per GB on R2
I have no doubt that R2 is probably WAY more performant than GHCR, but it's hard to compete with free
it's not price that's the issue, it's maxing out 60 out of 60 builders to test ISOs when you can do it with 12, do all the testing, get it all working, and THEN you max out the builders
Makes sense
our images are too big to fit on github
There is a limit?
I had no idea
I mean that makes sense
they'll attach them to workflows but for releases it's 4GB per file max
4GB is pretty generous, I wonder if we can leverage compression to get down to that size
not even close, already investigated it
it seems counterintuitive, but to get 40 out fastest, the way to do it is to test the ISO loop as efficiently as possible, because Kyle needs as much build capacity as possible to get bazzite and all the RPMs building on F40
so if the builders are maxed out the entire development slows down across ALL repos.
so we break it down and go in order
FYI:
"Making core contributions to a qualifying open source project? Drop a line to [email protected] with a link to the project’s landing page, repo, and a description of what engineering tools or resources your project provides to the developer community."
They might give you some resources for free.
Is the plan to continue having a history of 90 days of builds in cloudflare like we do in GitHub? That could get extremely expensive
no, we do not keep 90 days of isos in cloudflare.
We're planning (or already have) a backup of a known-good ISO stored separately, but there's not too much need for many ISOs because after the setup, they will still point to the same image
I wonder if we can incorporate some automated testing using openqa or something
where it installs the ISO and validates it boots to a desktop
openQA
openQA is a testing framework mainly for distributions
I haven't explored that.
could be useful.
To clarify, only the ISOs are going on R2, right? We are keeping images on GHCR?
yes.
That makes sense 🙂
So we're basically saying 6-7GB per variant ISO ... that might not be horrible
Yeah, and some handheld isos probably pushing around 10
Due to being a live CD ideally
I hope we can knock that down a bit.
Live ISOs are not a priority for me.
at some point, maybe.
we good for an aurora ISO run?
@Niklas ⚡ did we want to finish with branding or just get something out there?
we'll be doing fixes all the way from now on so might as well make sure the loop works
True. Lemme see if the right workflows published.
Wasn't sure on that until we merged into main.
@j0rge gonna run bluefin first.
do you have a backup?
I don't
for future reference, this is the one you'll want to use: https://github.com/ublue-os/bluefin/actions/workflows/build-bluefin-iso.yml
YOLO?
no
keeping my finger off the trigger.
why not aurora?
Aurora already has ISOs from his branch up. I'd rather wait for @Niklas ⚡ to give the thumbs up.
I'm not testing ISOs tonight, but if you wanna play roulette and they break, you gotta fix them. 😄
or maybe we can make a test run one that does one aurora and one bluefin but dumps it in an R2 testing bucket to exercise that function
might be good to build that out.
not tonight for me.
yeah I'm thinking not tonight for any of this lol
it's 7pm, and "real quick" and ISOs never mix, just trying to avoid another 2am
agreed. There is no rush on bluefin.
it's literally just using a new version of the ISO workflow to make sure it works.
and maybe some additional features from when we last spun ISOs.
at any rate, don't sweat it.
actually I'm more worried about announcement drafts, working on those lol
Im gonna get some branding / wallpapers into the ISO but not apply those to kde just yet
Sounds good. Would you like us to push up a new ISO tomorrow to your cloudflare instance?
Yep do it
Sounds good, too tired to do it now, but I'll go ahead and push it up sometime tomorrow 😄
Oh yeah go to sleep lol
niklas has perms in the repo now, you can click the Aurora ISO generate action any time you need it
Tested the uploaded Aurora ISOs. We are good to go.
Are podman instructions available for this? https://github.com/JasonN3/build-container-installer
GitHub
GitHub - JasonN3/build-container-installer: Creates an ISO for inst...
Creates an ISO for installing a container image as an OS - JasonN3/build-container-installer
thinking of people who want to make an ISO but don't have docker setup
Podman is interchangeable with Docker. I can PR some in.
wasn't there a gotcha with docker working but not podman? or was that resolved with adding mknod?
Solved with mknod.
excellent
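something like this should be the podman equivalent; the variable names follow the repo's Makefile conventions but treat it as a sketch and check the README:
# hedged example: --privileged covers the mknod requirement mentioned above; repo/name/tag values are illustrative
sudo podman run --rm --privileged \
  --volume "${PWD}:/build-container-installer/build" \
  ghcr.io/jasonn3/build-container-installer:latest \
  IMAGE_REPO=ghcr.io/ublue-os IMAGE_NAME=bluefin IMAGE_TAG=latest VERSION=39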
yeah I was thinking for people who keep asking about F40 ISOs, we can just be like "here you go"
I haven't heard of that one and it sounds like exactly what I've been building in the ISO repo. I'll definitely look into that and I might end up replacing my tests with it. Noel is looking to steal my tests for UBlue, so he might have to wait till this is tested first lol
Reopening this thread to discuss solutions around removing root account creation as an option in the ISO installer.
I'm open to options. We can either find a way to hide it or try to figure something else out.
i'd suggest docs, but we know no one reads them 😉
Is there not something so simple as "disable this feature" flags in anaconda?
I've been digging through the docs.
I can't find anything.
closest thing we can find is this.
it's mostly around branding though: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/anaconda_customization_guide/index
Red Hat Customer Portal
Anaconda Customization Guide Red Hat Enterprise Linux 7 | Red Hat C...
Anaconda is the installer used by Red Hat Enterprise Linux, Fedora, and their derivatives. This document contains information necessary for customizing it. Developers who wish to expand the base functionality of the installer will find information about Anaconda architecture, its add-on API and provided helper functions, and examples which will ...
How has Bazzite been doing with the web installer? I imagine adding a flag to the web installer would be easier than anaconda if no existing functionality exists
web installer doesn't work because %post scripts don't work.
it's beautiful because it scales properly on the steam deck.
but that means we lose that automatic setting it to the proper upstream image portion.
and also enrolling our key.
for MOK
there is an open PR for it, but no one on the Anaconda team can prioritize it: https://github.com/rhinstaller/anaconda/pull/5504
I'm not adept enough at python to make heads or tails of what they need to finish.
I'm hoping to get more details tomorrow from them.
I have an idea how to fix it.
Silverblue uses a feature called spokes to hide the user and password spokes
and network too.
I wonder if the PasswordSpoke and UserSpoke are different things.
https://github.com/rhinstaller/anaconda/blob/fedora-40/pyanaconda/ui/gui/spokes/root_password.py
They hide the user because gnome allows you to make one?
yes.
I'm curious what happens if you just hide the password spoke.
live ISO is easiest to test because I can change the settings in the profile and anaconda should hopefully load them up?
yup.
I figured it out.
@Kyle Gospo we need to add:
to the kinoite profile for anaconda.
or
better, we could define our own as an rpm package and it uses the profile based on stuff in /etc/os-release.
While you're there might as well do the css too
right.
we could add branding and colors and stuff.
this would apply for the old installer, but it's what we have for now.
The old installer is kinda bad
But
Waiting for web UI or shipping it too early is not that good of a solution either
Can you also specify some default partitioning or sth
with the profiles, yes.
that's what they are designed to do.
also could create a default user.
we could have 2 different profiles. One for bazzite and one for bazzite deck.
also
another thing.
initial-setup-gui
and initial-setup-gui-wayland-plasma
are packages that exist.
@Kyle Gospo what was the reason we yanked the gnome welcome screen from Bazzite-Deck-Gnome? Was it because of gamescope?
only reason I ask about the greeters is that we may simplify what our anaconda config has to look like if we offload the work onto first login.
But first login would only work for gnome
these packages are the answer for KDE apparently.
They don't use them currently because of a few bugs they are working out.
The installer OCI part is custom you said?
the packages I listed above would be added to the OCI image.
along with the gnome-greeter.
Unrelated
But the oci part is custom?
it's done in a kickstart file.
specifically the interactive-defaults.ks file
Link?
You need to fix the error
It needs to say what failed
basically I'm thinking all I would need to do is append: https://discord.com/channels/1072614816579063828/1192504002252914791/1234578263255285901 to
/etc/anaconda/profile.d/fedora-kinoite.conf
and that should remove the ability to create a root password from the gui
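concretely, the append would be something like this; the section and key names mirror what upstream Silverblue's profile uses to hide its spokes, so treat it as a sketch:
# hedged sketch: hide the root password spoke by appending a [User Interface] section to the kinoite profile
cat >> /etc/anaconda/profile.d/fedora-kinoite.conf <<'EOF'

[User Interface]
hidden_spokes =
    PasswordSpoke
EOF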
what?
When this fails the installer just says it returns -1
That's useless for debugging
I'm not sure I understand.
We get this error 1-3x per day
that is something in anaconda.
No
not something we can control
I need to look into what the template file has in it
But I don't think that's entirely anaconda's fault
The same goes for the progress bar
this uses kickstart which is implemented by anaconda.
it is entirely up to anaconda how they do it.
upstream silverblue and kinoite do the same thing.
there is nothing we do in build-container installer that handles that return code.
@Skynet feel free to weigh in on this.
it should show some logs or something
it can't just bail and eat the error
you can get the logs if you switch TTYs.
I am aware of this config file yes.
shows the deprecation warning
also aware of that.
doesn't say how to replace it
not everything in the config file works properly yet.
and they're replacing anaconda?
they are not replacing anaconda. The anaconda-webui is the interface.
underlying bits are the same. It's still in development.
no one has switched to it yet.
Anaconda is a hot mess 🙂
I'll work on writing up a lorax template to remove the root account password stuff.
should hopefully be a good start.
Fedora Docs
[f36] Kickstart Syntax Reference
Learn more about Fedora Linux, the Fedora Project & the Fedora Community.
still doesn't say how to replace it
how to replace what?
the deprecated file
kickstart?
no it doesn't lol
Yeah, there's nothing I can do to change the error message. That's just how Anaconda shows it. I'm also not trying to drastically alter Anaconda. If Anaconda needs to be changed, it should be changed in Anaconda
if all the oci logic is in the ostreecontainer command, i don't see a reason calamares can't be used
still doesn't fix the progress bar issue tho
how does bootc work?
Calamares looks like a completely different installer, which would require a complete redo. The goal was to replicate what Fedora is doing, but in a more flexible way that could be customized to our use
interesting
anyway, current problems with the installer can be boiled down to: root account, partitioning is messy, no branding, no controller support, no progress bar, iso size, requires keyboard
ISO size is nothing we can do anything about.
if it's an offline ISO.
you can shave at least some gigs there
if you use default partitioning on one disk, it works fine.
Branding is just some files, so that could be altered
yeah most of the stuff is minor fixes
yeah, I will work on writing up a lorax template to fix up the root account thing first.
i made a lil lizard script that allows using a handheld controller as a mouse https://github.com/hhd-dev/jkbd too so that + having a default user fixes the keyboard issue
that should be easy enough without making major changes.
for deck images we could add deck:deck as the default user.
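that could be as small as one kickstart line appended to the interactive defaults; the password and groups here are assumptions:
# hedged sketch: seed a default deck user for deck images; kickstart's user command is standard, deck:deck comes from the discussion above
cat >> /usr/share/anaconda/interactive-defaults.ks <<'EOF'
user --name=deck --password=deck --plaintext --groups=wheel
EOF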
also the ostree command can allow making a net installer iso, however bad of an idea that is
it is a terrible idea.
it's what we had before.
yeah but the installer was different right?
no
it used the same anaconda command.
so it was the same crappy installer rip
only pulling from online instead of from a local file system on the ISO.
the only difference was that it crashed or sth?
when network failed
if your internet blipped at all, it would kill the install and wouldn't retry.
give you just as cryptic of an error as the one you linked above.
so are flatpaks part of the image now?
yes
usually that error crops up when the user's disk is having issues or the USB they are using is flaky.
part of the ISO
where's the command that installs them
no more post-installation flatpak installation script
Couple of places
https://github.com/JasonN3/build-container-installer/blob/main/lorax_templates/flatpak_set_repo.tmpl
this sets the repo
https://github.com/JasonN3/build-container-installer/blob/main/lorax_templates/flatpak_link.tmpl
this sets a symlink for where the flatpaks live on the ISO
anaconda takes care of the rest.
there is also some post scripts that run.
https://github.com/JasonN3/build-container-installer/blob/main/lorax_templates/scripts/post/flatpak_configure
this adds a gpg key.
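the gist of that post script is roughly this; the flags are real flatpak options, the key path is illustrative:
# hedged gist: re-point the offline-seeded remote at the real Flathub and import its GPG key so later updates verify
flatpak remote-modify flathub \
  --url=https://dl.flathub.org/repo/ \
  --gpg-import=/path/to/flathub.gpg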
so this jason guy, you're talking with
but the action is considered upstream of some sort
That's me
lmao
who's that Jason guy? 😉
at any rate. getting a new branch going to fix up the root account thing.
yep, that can be fixed for the next iso
gonna do it in testing first.
should be pretty simple.
hopefully
would also be nice to tweak that kickstart file to allow for a proper network iso in the future
that does retry downloads
that would be in the python code that anaconda uses.
instead of dying
https://github.com/pykickstart/pykickstart/blob/master/pykickstart/commands/ostreecontainer.py here?
GitHub
pykickstart/pykickstart/commands/ostreecontainer.py at master · pyk...
python module for parsing and writing kickstart configs - pykickstart/pykickstart
we did toy around with the idea of maybe trying to download the container image into temporary space and then deploy from there?
I think there's an issue listed for that. My plan is to create a pre-script that will loop until it reaches a retry limit or it succeeds at downloading the container. One downside is that it will need somewhere to download it, but the only writable place will be a ramdisk
yeah but you have to be careful because tmp space will by default be ram right?
you can probably use the disk the way it's set up, if it's after partitioning
yeah, an 8GB image would destroy most users lol
would need 8gb somewhere
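the retry idea as a rough %pre sketch; the scratch path and attempt count are made up, and as noted it needs ~8GB of real disk, not tmpfs:
# hedged sketch: keep retrying the container pull instead of dying on the first network blip
for attempt in 1 2 3 4 5; do
  skopeo copy docker://ghcr.io/ublue-os/bluefin:latest oci:/mnt/scratch/bluefin && break
  echo "pull failed (attempt ${attempt}), retrying..." >&2
  sleep 30
done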
im looking at that ostree py file and it runs nothing
might be upstream to the ostree project then.
oh no, the file you sent generates the command
it's not the command
weird
can't find it anywhere
ideal part imo would be to boot the image using bootc as a livecd and install it, however far fetched that is
although it's still not clear if bootc actually boots an oci image
all of that stuff is in progress, just not usable today
Regarding bootc?
I would agree.
yeah, I'll have more info next week
True.
I haven't synced with colin in like 2 years other than a handful of calls so I'm just going to get the entire dump from all of them
Right. Ideally if bootc-image-builder allows for more customization to it, we could potentially use it. The problem is our entire backend would need to support bootc.
It's too early to tell.
Doesn't stop our anaconda woes though.
funnily enough, the installer running bootc does not mean ublue needs to
but the only benefit would be space savings
you'd be able to reuse the storage space of the image to have a live iso
if it needs to do some modifications to the image to use it, that's not beneficial
can anaconda run on a live ISO?
yes. It's what Fedora uses for their live ISOs
so something to look into i guess
fedora atomic desktop does not though
i don't know if it's for a technical reason or it's because it was felt unnecessary
Probably space reuse
@Kyle Gospo still wondering on this 🙂
on deck images it's because they boot straight to gamemode, which expects a user
on desktop it's for consistency
might as well make sure all bazzite images work the same way
understood. I was thinking we could solve the user creation issue by offloading it to the first boot configuration.
as long as the KDE one works well.
kde has one now?
actually nvm, not worth worrying about anyway -- it doesn't solve anything
per Neal, we can use these: https://discord.com/channels/1072614816579063828/1192504002252914791/1234580422889836545
no keyboard unless you log into steam, unless we also ship the original keyboards
which we avoid because they're kinda no-good on handhelds
oh, does steam need to be open in order for the keyboard to work?
yep
and logged in
in desktop mode?
unless you're in gaming mode
yep
that's silly.
Seeing these, what do these entail? This might be helpful for Aurora
not 100% sure.
KDE isn't using them upstream for a few reasons.
https://pagure.io/fedora-kde/SIG/issue/119
https://pagure.io/fedora-kde/SIG/issue/5
@Kyle Gospo working on this PR to remove root password option.
I tested it earlier by hand and it works as long as the lorax template adds it properly: https://github.com/ublue-os/bazzite/pull/1052
Can someone snag a pic of the new installer next time you can? I need it for the slide deck for the summit preso
The web GUI or the GTK one?
webUI please
Will do give me a few.
Do you care if it's Bazzite or Bluefin? I already have ISOs built for Bazzite.
I can always draft PR for bluefin to set the switch on.
whatever works, it's going to go next to a picture of the actions running
"oooh what is that?" is the vibe I'm shooting for since I expect most of them haven't seen new anaconda
true
For the sake of consistency, if you are mostly showing off bluefin, I'll get you an ISO build of that.
also lets you play with it if you'd like.
I'm showing all of them, it's fine
OK
I'll grab pics from the Bazzite one then 😛
@j0rge
I only needed one lol
but yeah, awesome anyway!
I got you all the screens just in case 😛
I want this installer so bad
noel
also, I'll have to look at the default profile, but it looks like it wants you to create a user in the installer.
not the bazzite one
testing that in a sec.
ah ok!!!
this looks great
I'm gonna guess that hiding the spokes no longer works in the web UI.
Another bug report..
yay.
:dispair:
#root-account { display:none!important;visibility:hidden!important;opacity:0!important; } /* eat shit */
probably something like that 😉
gonna submit an issue for that.
I'm not going to bother testing Bazzite. Silverblue is supposed to hide more things and it didn't.
likely just not hooked up yet.
1 step forward, 2 steps back.
can't win 🙂
hey noel
:dispair:
my fakenitro emojis broke
even worse
oh never mind, kyle got it
I'm like, hide the checkbox!
I want to. It broke though 😦
I recommend the jorge methodology
"Make Kyle do it."
if you want the Web UI faster, that's what it will have to be. Unless I learn python out of nowhere and get really competent with it.
I'm tempted given so much of what I touch uses it. Ansible, Anaconda, <insert the next problem ublue has for me>, etc.
I think we just cut out the buttons and call it a day
I mean true... But we could be good open source contributors and fix the problem? 😄
Sure.
But also it's duct tape until bootc
is bootc not going to use anaconda?
As far as I can tell, bootc-image-builder uses anaconda.
looks really good
the root account is no biggie
just prefill the username/password which is not optional now
and it also says it's an administrator
that way no keyboard required
the ui looks like it fits smaller screens too
It does, I tested it.
then add my little script for controller support and its ❤️🔥
also skin it a bit, put a bazzite logo and change the header
Is this real? 🥹🥹🥹
working on a just script for the bluefin repo that builds the ISO like the action does. Holy smokes is building these ISOs slow
That's handy though, I would use that
Takes like an hour
That seems way longer than it should take, what hardware?
It's also doing the flatpaks
Should be faster than the action
Oh, network limited then I bet
Yepp
At airport
On my hardware, it's way faster than as a GitHub action.
I'm guessing testing on an unplugged laptop with airport wifi isn't the best
What I'm working on is in a bluefin pr. Need to update the dev-container to docker defaults from podman defaults