Better homebrew installation
Better homebrew installation: https://github.com/ublue-os/bluefin/issues/1290
237 Replies
@Robert (p5) you showed an action that installed podman in CI the other day, I wonder if we can use some of brew's CI features to get what we want in the image.
https://github.com/Homebrew/actions
Installing podman with brew inside GitHub action doesn't work due to this issue
https://github.com/containers/podman/issues/22152
while the issue is closed, it's still reproducible
not directly connected to your question, but just adding a note that issues like this can potentially happen
yeah, I'm not thinking of complex services either, more everyday tools
Okay, I think this is doable right now.
The CLI and non-interactive flags will be helpful. So long as we are inside a podman container or docker container we can run the installer script as root
and we can chown the dir at the end to uid:gid 1000:1000
so before we even have a username
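A rough sketch of that build-time step (`NONINTERACTIVE=1` is the installer's documented way to skip prompts, and its root check is relaxed inside containers/CI; the exact invocation used here is an assumption):
```sh
# Inside the build container, running as root
NONINTERACTIVE=1 /bin/bash -c \
  "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
# Hand the prefix to the default first user (uid:gid 1000:1000)
chown -R 1000:1000 /home/linuxbrew/.linuxbrew
```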
ooooh
now, trying to figure out how to work around the /home location
since normally things coming from the image should be immutable in /usr
I'm pretty sure the home location is required isn't it?
so my initial thought is we install via a bind mount from /usr/share/homebrew or something. And then have an overlay mount at boot time to make the location mutable, and the overlay comes from something like /var/lib/homebrew
yes, we want to use the default location so users don't have to recompile anyway
nod
so it's how do we have it in a proper location while still working around the confines of rpm-ostree
https://formulae.brew.sh/analytics/os-version/30d/
still not sure why we don't show up here
it uses lsb_release or something like that and we show up as Fedora
bind mount inside of build container works. So setting up a service file to do the bind mount and overlay on boot
lowerdir will be /usr/share/homebrew, upperdir will be /var/lib/homebrew
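As a sketch, the boot-time mount would look something like this (the mountpoint and workdir path are assumptions; overlayfs needs a workdir on the same filesystem as the upperdir):
```sh
mkdir -p /var/lib/homebrew /var/lib/homebrew-work /home/linuxbrew/.linuxbrew
mount -t overlay overlay \
  -o lowerdir=/usr/share/homebrew,upperdir=/var/lib/homebrew,workdir=/var/lib/homebrew-work \
  /home/linuxbrew/.linuxbrew
```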
if you PR let's do it in a testing branch or something, I'd love to bang on this for a bit
Also happy to test drive this
we have some stuff in the testing branch right now for the bootc stuff
but yeah we can add it there
I think we need to do a symlink for build time, since we don't use the privileged flag when building the container and can't do bind mounts inside the container
yeah, IMO let's toss all the new stuff in testing
and then pull judiciously into main, heh
okay I think I have a working idea.
doing a local build
Local build worked but the action build failed due to the directory already existing. Can work around it, but an odd difference
Extremely bizarre failures
probably should have done it in a separate PR?
but whatever, next time, heh
I said we were already using testing....
But I might be reverting all of this.
The installer is complaining that /home exists
yeah
I wonder if we can -f? checking manpage
The script didn't fail like this on my local build....
Currently concerned about docker vs podman, and the ancient podman on our builders
the builders won't move to 24.04 until this summer
https://github.com/github/roadmap/issues/958
https://github.com/actions/runner-images/issues/9691
beta mid-may
Okay. It doesn't think it's in CI for some reason.
So I'm going to have to switch to building with podman locally. Docker vs podman logic likely biting us
oh man, just realized
with it preinstalled we could put brewfiles with sets of packages in /usr/share/ublue-os
ujust k8s-tools
etc would just brew bundle install --file=/usr/share/ublue-os/Brewfile.k8s
or whatever
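For illustration, a persona bundle like that could be as small as this (path and formula names are hypothetical examples, not a decided set):
```sh
# Hypothetical persona file shipped on disk
cat >/usr/share/ublue-os/Brewfile.k8s <<'EOF'
brew "kubernetes-cli"
brew "helm"
brew "k9s"
EOF
# and the corresponding ujust recipe would basically just run:
brew bundle install --file=/usr/share/ublue-os/Brewfile.k8s
```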
then we just ship "personas" for different tools right on the disk in simple text files.
Going to rollback the changes in testing and make a new branch.
Looks like we're getting bitten by old podman or something, since the build script doesn't believe we are in a container despite having the CI variables set.
There is a check for making sure the installer isn't running as root but it's supposed to not hit the exit condition if certain files are present indicating it's inside a runner of some sort.
It successfully detects when it's run inside a docker container.
so ... docker time for bluefin?
Most likely hence the other thread
Since I already made the necessary changes to the Containerfile for docker to work with it
Uhmmm. I can just grab brew directory from bluefin-cli.
Going to try that before rewriting the actions
I was also showerthinking ...
the wolfi folks had to figure out a bunch of stuff to get it packaged, maybe that's worth looking at
like, maybe we can do a "install brew" action that is just snagging it from wolfi/bluefin-cli
That's exactly what I'm thinking. Use a COPY line and snag it
We should consider making another Wolfi box that is just things we copy from so we don't have to download as large of an image as part of the build if we keep using this pattern
just lmk what you want in it.
Right now I find it funny that I download like a gig just for atuin
But if we switch to brew.... We can have it there
oh in the builder? yeah lol
we used to have all sorts of COPYs back in the day
Yepp...
But will test out this afternoon with putting the Brew directory in /usr
And then using an overlay for it to work on bluefin
oh, well if we're using bluefin-cli for atuin then copying brew outta there shouldn't redownload the container right?
Nope. Since I download it once using FROM
brew is on image. Now working through mounting everything so it thinks nothing is amiss
building test iso right now so I can play with it
Brew is on image. The overlayfs is working. Unlinking the lower dir doesn't work, so we need to either bring Ruby onto the image or think of a better method of handling dirs that Homebrew expects to be able to manipulate
Need to make sure that
https://github.com/Homebrew/brew/blob/master/Library%2FHomebrew%2Fbrew.sh#L238
We'll have to move to docker if we want to do brew install or just touch that file first.
During build time.
I'm leaning towards just chowning the lowerdir to 1000 as part of cleanup.
And I got brew to install something during build time
nice!!!
bluefin-cli I need you!
no bluefin-cli used with this
I meant the packages
The Good Enough Shape lol
now I think brewfiles should be distributed and we run them on first boot or something
yeah or ujust them
for like, a k8s setup, etc.
since, I'm pretty sure everything in this directory will effectively be doubled once things get going
will need to test some of installing in a booted system
effectively, we're putting brew "in place" but since everything needs to be shipped as part of /usr, the overlay will double updates
you mean it'll update twice?
not quite.
Everything we do is in /usr. But the user cannot write there. So I'm using an overlayfs with /var
but brew wants to update everything together so it will bring the updated binary to /var.
so once they start using it, effectively everything should be in /var and the stuff in /usr is masked
unless I understand overlayfs wrong
so i would lean to keeping what's in /usr as small as possible
or we look into preventing brew from updating itself somehow
you can pass it a flag
wolfi had BREW_NO_UPDATE or something like that
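For reference, the knob being half-remembered here is probably Homebrew's own environment variable (setting it system-wide like this is an assumption, not necessarily what wolfi does):
```sh
# Stops `brew install`/`brew upgrade` from auto-updating brew itself first
export HOMEBREW_NO_AUTO_UPDATE=1
```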
but then the updates come with the image updates instead of brew itself?
or do you mean brew itself
brew itself
oh so it's like bluefin-cli and wolfi, same behavior
yeah.
but the packages themselves update independently?
but I'm unsure if that's fully possible right now
and hopefully this will mean automatic updates will work?
we need to fix ublue-update for that
ah ok
just need to pass the homebrew path if it exists to it so it knows that brew is there
sleep time
but, we're much closer now to brew being shipped on image
please put up some brewfiles for what our bling should look like
k
I'll add them to the issue
going to bed in a little bit too
❤️
Should we support multi-user out of the box, or a ujust?
The main thing is convincing brew to not update itself.
We also may want an extremely minimal tool installed in CI in order to make sure the portable Ruby install ends up as part of the image.
But reading the brew docs, it really wants a single user to own it. And to sudo to it when doing brew commands.
https://man.archlinux.org/man/sysusers.d.5.en
Can use this for creating the dedicated brew user.
Then have a global alias for brew to run it as this user either using sudo or systemd-run
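A minimal sketch of that multi-user idea (the user name, paths, and sudo wrapper are all assumptions):
```sh
# sysusers.d entry for a dedicated brew owner
cat >/usr/lib/sysusers.d/homebrew.conf <<'EOF'
# Type  Name      ID  GECOS       Home
u       homebrew  -   "Homebrew"  /home/linuxbrew
EOF
# Global wrapper so `brew` runs as that user (sudo shown; systemd-run also works)
alias brew='sudo -u homebrew /home/linuxbrew/.linuxbrew/bin/brew'
```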
I'd just leave it single user
Another idea.
We copy the install into place on first boot. Then it's just managed by standard brew actions. No overlayfs.
Basically biggest concern is that the user runs brew update and it does nothing
yeah but what if the user isn't on the network?
Won't work now either unless we are using brew install during build
yeah
So... if the goal is to get brew in place without having to run the install script, I can do that.
If the goal is to distribute things on image using brew, we can do it, but it introduces things that brew isn't expecting.
Brew expects to be able to own its directory, and the read-only aspect of it makes things a bit different
when/if mike responds to my mail let's see if they have any recommendations
Okay.
But I have a working concept for both methods right now. Though I have some reservations about the overlay method.
brew untars stuff, so I'll need to move the action over to using docker or wait until the 24.04 runners get released
If you just want to rev fast, I vote throw this in the build for now
will just throw that in right now
I wonder if we just ask for ubuntu-24.04, what happens, like it'd fail and then sometime in the next 2 weeks it would start working, heh
I think they simply don't exist yet publicly
well, added that snippet for now
Also wrote some service files for auto updating and upgrading brew if mutable or on image
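Roughly, a service/timer pair like that looks as follows (unit names, paths, and schedule are assumptions; the real units live in the PR, and the service would need to run as the brew-owning user rather than root):
```sh
cat >/usr/lib/systemd/system/brew-upgrade.service <<'EOF'
[Unit]
Description=Upgrade Homebrew packages
[Service]
Type=oneshot
ExecStart=/home/linuxbrew/.linuxbrew/bin/brew upgrade
EOF
cat >/usr/lib/systemd/system/brew-upgrade.timer <<'EOF'
[Unit]
Description=Daily brew upgrade
[Timer]
OnCalendar=daily
Persistent=true
[Install]
WantedBy=timers.target
EOF
systemctl enable brew-upgrade.timer
```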
@Robert (p5) That works quite well.
We'll need to split the brew just commands in config out so we can overwrite them.
but we now have auto update timers.
by the way.... loving having these build-iso just actions
working out of the box
oh dang I need to try this.
if you check out the homebrew branch, you can test with a simple:
It takes a hot minute to build the iso and do the install
https://github.com/ublue-os/bluefin/pull/1293
This is ready to be played with:
@j0rge
there is even a brewfile for all of the bluefin-cli stuff
is there a way to test this prior to merging?
ooooh
ok I'll check it out after work, need to wrap up stuff
no worries.
I have guards on doing the overlay mount if there is an existing homebrew dir.
I also have the brew update run only if it's a mutable homebrew install. brew upgrade will run for everyone
So basically everyone will now just have brew with completions out of the box
with auto upgrade/updates
https://github.com/ublue-os/bluefin/pull/1293#issuecomment-2108748101
These are the current assumptions
note, we really should move the brew configurations in config to another file so we can mask it in bluefin
building
WHOA THIS UX!
the just commands?
or the brew just working?
still building
I love the output though, heh
| Install Terminal Bling w/ Brew |
need something slicker
@bketelsen brain
Enable terminal bling
?
Bling it up fo sho
Docker has a slick build output. Then if it fails, it dumps everything to stdout
error: Recipe `build` failed on line 40 with exit code 1
which one?
just build-iso?
this mediatek, on just build
didn't get to the iso yet, or should I just do the ISO?
recommended some copyediting in the PR
you need the container before you can build the ISO.
I'm guessing that your container failed to build
trying again
I'm trying now.
it got past mediatek
onto kmods
transient error
i hate those
mine built
building ISO
awesome
That will take a hot minute
this is so awesome
feels good to use my machine instead of waiting for a builder
I said improve the dev experience on that pr
You can also spit out an official iso with
just build-iso-ghcr
But honestly downloading might be faster
it's shredding my IO, cpu not so much
For run-iso
Open a browser after running the command to localhost:8006
novnc to the vm
I have a reason to upgrade!
Same. I realized my laptop was marginally faster than my desktop
Well not really
just run-iso
doesn't accept arrow keys as input
tried it in gnome-terminal too so it's not ptyxis
but wow, this is awesome workflow-wise!
Use the browser
Localhost port 8006
@j0rge
I haven't figured out a good way to open the browser from a dev-container yet
ok
THIS
IS
AWESOME
@Kyle Gospo YOU MUST HAVE THIS
@Noel YOU MUST HAVE THIS
I wish I had kyle's threadripper
like having a chainsaw for a processor
brrrrrr
and then when you make a change in github I just git pull and rerun the commands
ISO testing, ez pz
Yeah
3 just commands
done.
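For anyone following along, the three commands being referred to are roughly these (recipe names are the ones mentioned in this thread; the exact definitions are in the repo's justfile):
```sh
git checkout homebrew   # or whatever branch you want to build
just build              # build the container image locally
just build-iso          # wrap it in an installer ISO
just run-iso            # boot the ISO in a VM, then browse to localhost:8006
```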
Is this just in Bluefin currently?
Yes
Where is the PR?
happy to check it out.
homebrew branch
Well the just commands are in main
Homebrew branch for testing homebrew on image
I'm on the flatpaks now
I should no-op brew update to make sure users don't accidentally turn off updates
I can test fresh setup
Yeah
what's the tldr for updates, I have anxiety now that gts is a thing
thank you for removing virt-manager from my life
which file is run-iso in?
Scripts run-iso
oh I see it.
I'm assuming I need to build the ISO first? lol
ok, so I'm in the homebrew branch in git
but it appears to have built main
might need a rundown on how this works
Voice?
I have 5 minutes.
https://discord.com/channels/1072614816579063828/1238923088700379226/1239641047957770300
I followed this
yeah mine is just building main, no idea how to pass it to build the homebrew branch, can't voice, watching 2 kids rn
Yeah, looks like there are a few kinks, but we can work through them pretty easily. 99% of the way there. This is awesome @M2
Are you not in homebrew branch @j0rge
yeah
Odd
Are you sure it's main?
this is so awesome, gamechanging though
Check if /var/home/linuxbrew is there
So dumb question, could we move this to a different branch and PR than the homebrew one?
It's already in main
Been there for a few days
So the just files are in main?
what do you mean by "are you sure it's main"?
Because I see the just files in Bluefin lol
As in I don't have any tagging of the iso for which branch it builds
Oh so in the ISO check for /var/home/linuxbrew?
It just builds the current checked out branch
Yeah
After install
Oh you are saying main like bluefin main.
Yeah
We have too many things called main my brain crapped out for a second.
I'm following better now.
retrying, but when I booted into it last time none of the brew stuff was there
So the only concern I have is that building the containers with docker does deviate from what we do when building in prod.
In my opinion, we should be building the containers using the build tool that is used in GitHub.
Which I thought was buildah or podman?
I don't know where we landed on if we are going to switch to docker? Or if the zstd chunking is only going to be a feature in podman?
Did you build the container in the homebrew branch?
What's a bit brain busting is build-container-installer uses docker.
yeah, I checked out the homebrew branch and I'm in that branch in git
Interesting. Try going through everything again?
yeah
I'll make a video or something tonight or tomorrow
this is so awesome
We do not match prod.
Currently we build images with podman 3.4 on Ubuntu VMs.
To get this PR working, I'm updating to podman 4.9.x using obs. It will not build with 3.4
For the ISO action that builds using docker.
Since everything works with docker and this was originally aimed at vscode + dev-container I chose docker since that is our out of box config on -dx
Yeah, I figured there was a reason.
I wonder if I can figure out who publishes our buildah action and figure out what's going on there.
we use the one from Red Hat right?
Using rootful podman is actually quite difficult on your local system
Correct. But it uses whatever the system podman is. RH does not package anything for Ubuntu
I considered changing the workflow to use docker but @Robert (p5) had a snippet for getting a newer podman from OBS.
This is what centos-bootc is doing
seems silly of us if we want podman to become the standard.
OpenSuse Build Service?
Yes
they build a .deb of 5.0 and Red Hat doesn't package it for ubuntu?
I think just getting the right eyes on this discussion may start a good conversation about how to generally improve things in GitHub Actions
https://github.com/containers/podman/discussions/22675
what a time to be alive.
I will get this in front of more eyes.
very nice job writing this up.
Please read it before sharing. Happy to change anything to make things more clear or add something else
I could recommend some formatting to make it more readable.
lemme take a crack at it, hold on
Additionally, the lack of an easy-to-use way to interact with the rootful podman socket is a huge pain.
I ended up doing a docker-esque group with podman-remote. But ... Docker works
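For context, the workaround being described is roughly this (the group name is hypothetical; the socket path is podman's default for the rootful API, and the chgrp/chmod won't survive a socket restart without a unit override):
```sh
# Expose the rootful podman API socket
sudo systemctl enable --now podman.socket        # serves /run/podman/podman.sock
# "docker-esque" group so non-root users can talk to it
sudo groupadd -f podmanusers
sudo chgrp podmanusers /run/podman/podman.sock
sudo chmod g+rw /run/podman/podman.sock
# point podman-remote (or a docker client) at the rootful socket
export CONTAINER_HOST=unix:///run/podman/podman.sock
```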
This is the main section which could make people go "Oh yeah... This is a problem"
With the recent release of the Bootc images comes the expectation for users to extend the images in their own CI environment but the incompatibilities with the older version of Podman building Fedora 40 images cause users to face problem after problem when customising images. ... only to find out it's an issue stemming from using an older Podman version, users are more inclined to switch to other ways of building their images, such as Docker or installing Podman from unofficial and untrusted locations.
@Robert (p5) https://hackmd.io/xvbScbvJS4K1siFWddtpiA
let me know if you can't see this.
I formatted it into sections and removed sections that I thought were redundant.
Red Hat is two separate words? I never knew that!
Thanks! Reading through, and it looks good
Yeah I'm actually confused by that now
Red Hat has always been 2 words?
cool. I would update with those edits and then I will post this internally at Red Hat.
In my opinion, this is much easier to read sectioned out like this instead of a big block of text.
Of course HackMD doesn't copy the MarkDown content... Why would it?
Just formatting things in GitHub, then will let you know
weird...
are you viewing it in edit mode?
That would help... Now I am
lol nice
Updated!
For build container and iso, I think I'll add the git branch name so it's a bit clearer on what we're building
Probably as a version tag
So bluefin-build:39-main
sgtm, yeah it built main again this time
looks like I missed a lot here
A lot of complaining about differences in Podman in CI and locally, and M2 working ISO magic to get local builds working
shipping brew on the install tho? that's kinda big
Oh yeah, and that. I forgot that's the purpose of this thread
PR is up, my biggest concern is handling upgrades so it's transparent
This thread got derailed.
I am excited to see this though!
yeah but also worth it because knowing this local dev story is so easy after actually trying it is worth it
Oh yeah
That too
but also yeah we can have the bluefin cli-like hookup right with brew.
Shipping brew out of the box will convert a lot of people from Mac IMO.
so it comes vanilla ootb, then you can BLING IT UP
Folks who are on the fence anyway.
Okay have most of the justfile stuff moved over to a new branch
Definitely need to no-op brew update.
Since the files on image are always dated with unix epoch, older files need to have higher priority and I don't think overlayfs works like that with the upper and lower distinction
On image need to remove this file:
https://github.com/Homebrew/brew/blob/4.2.21/Library/Homebrew/cmd/update.sh
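In other words, something like this at build time (the path assumes the copy of brew staged under /usr; this is the blunt version of the "no-op brew update" idea, not necessarily what lands in the PR):
```sh
# Strip the update command from the image copy so `brew update`
# can't fight the read-only lowerdir
rm -f /usr/share/homebrew/Library/Homebrew/cmd/update.sh
```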
i friggin love linux: --really-force is my flag of the day
Homebrew 4.3.0
Today, I'd like to announce Homebrew 4.3.0. The most significant changes since 4.2.0 are SBOM support, initial bottle attestation verification, new command analytics and uninstall autoremove by default.
all the cool sbom/attestation stuff is starting to land, perfect timing
i just booted the installer that I built locally and this is the future
with brew on disk?
idk the novnc window showed it booting now it's blank, but it's only been a few minutes
and now I have to go to $work stuff for a bit. I'll let it try to boot for a while and come back
send CTRL+ALT+Delete. I think the reboot after kargs is busted
Spam that button like 7 times and systemd will reboot the system
but I think that indicates we have an issue with hardware-setup-script
there is a button to send ctrl+alt+delete in the noVNC controls
also, there is a browser inside of vscode
so added a vscode task to open it
will see if I can have a task for all of the just commands
notice how it now has the git branch
there's no way to make the keyboard work on the boot screen right? I just have to wait for it to timeout?
seriously this is so amazing
the disk it creates is ephemeral, right? I don't have to worry about cleaning them up?
I used just clean in the root
just clean will delete all transient files like ISOs and flatpak dep lists.
just clean-images will delete all build images from.
For the VM, it's running in a container with --rm so when the container exits the Virtual Disk will be deleted
if you are in the novnc, enter and arrow keys will work. Modifier keys need to use the on screen key presses
yeah, doing this inside of vscode is pretty cool
should I be using the devcontainer? Maybe that's why it's failing
I personally use a devcontainer
I haven't tried the VNC portion yet.
that makes it so sudo actually works.....
But I'm landing an improvement as part of the justfile work to actually be able to delete root-owned files (like the ISO) inside of vscode. VS Code prevents SUID binaries from running in the built-in terminal
when did gitlens get soo good?
no idea, but it's honestly my preferred git easy mode right now
it used to suck a lot, but I have ignored it for X years where X > 2
https://github.com/ublue-os/bluefin/pull/1293/
Fully ready for review now.
For testing in the repo:
Connect to the VM with noVNC on localhost:8006.
Also integrated are vscode tasks:
Run Task -> Connect to VM
That will open up a browser inside of vscode to port 8006. You may need to refresh. On first boot you may have to send CTRL+ALT+Delete to reboot the image after kargs are set (haven't figured out why the reboot hangs). Then create your user with the first-time setup.
ujust bluefin-cli to install all the brew bling
Should the overlayfs stuff be ported to config?
I'm leaning no, since config seems to be more opt-in features
This is being added to bazzite as well in testing
well, it's saturday night
anyone up for finishing up bluefin-cli? I was thinking it should be a toggle like enabling DX so you can just bling on and off.
I think the toggles:
Eza for ls
ugrep for grep
Turn on zoxide
Turn on atuin
Starship is still at the system level.
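A minimal sketch of what those toggles could expand to in a shell profile (assuming bash and that the tools come from the brew prefix; the real ujust/bling wiring may differ):
```sh
# eza for ls, ugrep for grep
alias ls='eza'
alias grep='ugrep'
# zoxide and atuin hook into the shell via their init commands
eval "$(zoxide init bash)"
eval "$(atuin init bash)"
```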
I like ripgrep and fd, but neither is a true replacement. I do find rg actually works really well for what used to be a find piped into grep. rg actually seems a bit better at doing find-like things than fd at times as well, but it's nowhere near the find syntax
I think just including fd is fine
the colors in the box feel nicer too, like more Catppuccin?
We actually have locale properly set now so Unicode characters aren't as jacked up
So things like bat now work