cc @M2 @Robert (p5) @jeefy
ok, so if I had a time machine, I would go back and do this as the first solution. And also, the actual reason is that this feature is well beyond my skill to implement.
So thanks @M2 for leading this. I have some thoughts as to how we ended here. (If it works in practice that is, hah!)
Alright, so the reason this is part of The Final Shape is that it means the toolbox containers can be rebuilt from scratch, as reproducible as possible. It's like the first time you tried docker and didn't know about volume mounts: you do stuff, the container gets destroyed, and you get mad that you lost all that data. Then you realize you know enough to get into trouble but don't have the conceptual understanding of containers yet
And I think part of the reason we started is toolbox encourages you to use it to replace your CLI/dnf experience. Which makes a lot of sense.
So, I think it ends up encouraging pets. But most of us who work in containers don't do it that way, we declare instead.
This was exacerbated every time `podman` broke in Fedora, which happened enough times that people would be upset that they lost their pet container. In hindsight we could have been stronger in our recommendations: all containers are considered replaceable at any point, without losing data. This really annoyed me, so I blogged this: https://www.ypsidanger.com/declaring-your-own-personal-distroboxes/
So quadlets presented us with an opportunity to make the toolboxes disposable and split out the config/data. Destroy them all, I can snap my fingers and they come up perfect. That's what we're shooting for because it's the correct way.
That sounds useful
Pros have it all automated anyway, right? They either have their own config management tool like ansible that sets it all up for them, or we build ours in github, etc.
Right
Then that removes the possibility of you being like "why does my computer never remember the packages I install", which would be a terrible outcome
so you just get it fresh, every day, built from scratch
and it removes a great deal of the mirror issues people have, since ghcr.io's mirror capacity is huge.
and you don't need to care about how slow dnf is anymore since it's just getting oci layers anyway, just like the OS.
and if you've got a homelab now your OS and workflow can be cached, served locally, etc. It ties everything together.
you'll of course still use dnf manually, it'll just be for "oh let me try this thing" and then if you wanna keep it, you can add it to the file. Then tomorrow the container is destroyed and rebuilt but you still have it, it's just in the container now.
Two main concerns:
1) making that config management as fast and seamless as possible
2) cleaning up dependencies from the config
1) is what we just finished, that's how I came to the conclusions I posted about
confused on #2, which dependencies and in what config? The list of packages?
It does sound like the more proper way to implement sustainable containers, but it is competing with "dnf install xyz" for ease and simplicity - all of the people willing and able to maintain a higher level of skill and effort already do this. What is the just-finished solution?
The just finished solution (tldr of what I said above), they get rebuilt every day from scratch, the pet container is destroyed unless you declare the package in the file.
Sorry I am extra rambly tonight
Ah, I mean, what is the solution to: making it frictionless to declare packages in that file and remove them, which takes nearly zero skill or effort for someone who has never used declarative package management and is only comfortable with a traditional package manager? Rebuilding containers and separating temporary additions from permanent configs is clean, but it is an "MIT approach" (https://www.dreamsongs.com/RiseOfWorseIsBetter.html) which, historically, does not catch on. The people who already use Ansible and such, as you said, already do this - what it needs to achieve for wider adoption is parity with traditional package managers for ease, speed, and convenience of use, as well as ease of learning, which Ansible absolutely does not achieve
it's a text file, wanna see mine?
Yes
oh, we put the initial one in the template, heh:
https://github.com/ublue-os/bluefin/blob/main/usr/etc/distrobox/distrobox.ini
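structurally it's just something like this (a sketch, not my exact file — keys per distrobox-assemble, image/packages/path illustrative):
```sh
# sketch: declare the container once, then rebuild it at will
cat > ~/.config/distrobox/distrobox.ini <<'EOF'
[bluefin-cli]
image=ghcr.io/ublue-os/bluefin-cli:latest
additional_packages="ansible lynx chromium neofetch cmus"
replace=true
EOF
distrobox assemble create --file ~/.config/distrobox/distrobox.ini
```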
So - dependencies are cleaned up by rebuilding without the packages which pull them in?
if you mean "do you need to care about dependencies", it uses the same package manager. It'll pull in what you need by default.
I wanted to double check that one of the reasons for this is to eliminate the need for a dependency solver when removing packages because the containers are just rebuilt from the ground up with the minimum of what they need
no, the problem isn't package dependency, it's the fact that you've created a pet container
which is an anti pattern
because when the container goes away all the packages you installed are gone, and that sucks
Same thing, the way I was referring to it
I was confirming that detail about dependencies as a prelude to my main thought on that config file
If dependencies are managed, which it looks like they cleanly are, and all of the logic is outside of the config file so the config basically boils down to the one line of
#additional_packages="ansible lynx chromium neofetch cmus"
. . .
Then it should be trivial to write a dynamic command line utility to manage that one line
yeah
sure
however you interact with that file is whatever tools people use
technically you could build the thing with nix if you wanted to
like, however you wanna UX it, sure
It's a small detail which doesn't actually impact how the thing works, but if you want people to adopt the correct pattern, it has to be frictionless compared to what they are currently using, as well as providing significant upsides.
Note if you want to use the assemble file, you won't use the quadlet.
You can put the assemble file's items into the distrobox quadlet, but it's inside the Exec line.
UX statistically and materially impacts how well something actually functions when humans are involved. Apple's UX is measurably, fundamentally terrible and anti-user, but for several interesting reasons, the simple fact that it is visually appealing to a certain subculture makes it more usable and functional than it should be
And software engineers never know or realize that because it's studied and taught exclusively in industrial engineering
It's a brilliant piece of software engineering to streamline that user-facing declarative config file down to a single line of packages in one string, because even I can write the utility I think that needs to provide UX people will latch onto
Note that additional packages line will require the Internet to use.
I like this pattern and appreciate that you brought it to my attention. I would like to learn more about it and will think about prototyping that configuration tool. I think it would just be a shell script with two args and one elif/switch statement with three options
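Something like this sketch, maybe (a hypothetical helper — all names made up; it just rewrites the additional_packages line):
```sh
#!/bin/sh
# hypothetical "dbpkg" helper: add/remove/list packages on the single
# additional_packages line of a distrobox assemble file (path illustrative)
INI="${DISTROBOX_INI:-$HOME/.config/distrobox/distrobox.ini}"
line=$(grep -m1 '^additional_packages=' "$INI")
pkgs=$(printf '%s' "$line" | sed 's/^additional_packages="\(.*\)"$/\1/')
case "$1" in
  add)
    sed -i "s|^additional_packages=.*|additional_packages=\"$pkgs $2\"|" "$INI" ;;
  remove)
    # word-split $pkgs on purpose, one package per line, drop the exact match
    new=$(printf '%s\n' $pkgs | grep -vx "$2" | tr '\n' ' ')
    sed -i "s|^additional_packages=.*|additional_packages=\"${new% }\"|" "$INI" ;;
  list|*)
    printf '%s\n' $pkgs ;;
esac
```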
yeah, I mostly just switched and it removes caring about that stuff ever again, so problem solved for Bluefin, it'll be the default, everywhere else opt-in
just assemble
in bluefin will replicate how it works
so if you are on say, bazzite, you would run that instead of update for your containers
There is one advanced feature which I think would be useful but trickier: a log of temporary installations done with the package manager, so that one does not need to remember what all they added/tried while getting something to work
Or potentially a way to clone the current, temporary container
actually that's how it should be worded: it's "your toolboxes are moving out of `just update` and into `just assemble`" as a different step, but the real answer is `just assemble` should be the way you update toolboxes anyway. You just decide you don't want pet containers.
My thought process there is that sometimes I need a package for a day and then want it cleaned up, and I want a stable, clean configuration which stays like that, but I also want to add a number of packages for as long as I am working on a project
if you want a pet we won't kill your pet, it's just not the default (in Bluefin, for others reading this thread, we don't put opinion in the base images)
see, isn't it nice that we don't put opinions in main?
I believe I should note and gently object to the moral judgment you are attaching to the concept of "pet containers." Some people maintain and cultivate singular containers which are not easily replaced because they want to. I doubt this is a majority. I suspect most people end up creating what appear to be "pet containers" because they need to install packages to do their tasks or achieve their goals, and it becomes easier to accumulate those packages than to restart from scratch and figure out what was actually still critical. The solution is not a moral judgment of these people and a moral decision by them to do better - it is to create better tools which comfortably and more effectively manage, clean, divide, and reproduce containers according to the better pattern
For my use cases, I believe such tools should allow me to:
- Declare a minimalist "base" configuration which a permanent container is built from and reset to when regularly rebuilt
- Install additional temporary packages using traditional package managers or building from source
- Dynamically add and remove packages from each configuration
- Dynamically add and remove configurations derived from the base configuration and temporary packages
I don't think that last point is currently supported, and I think it is critical for certain project-based work
building from source shouldn't be affected, all that stuff is in the home directory already
like if you installed to ~/.local/bin/blah it'll work fine, it's never been a problem
Yeah
it's not a moral judgement at all. It's just a clear requirement for people who opt in to Bluefin because we recognize the most efficient path. The one that's been proven in practice for a long time and then applied to desktop to remove package jank from our lives? Yep, that's the default. Easy win for our target audience.
Makes sense for Bluefin, yeah
like, for us it's a clear default pattern
dang this conversation isn't indexed yet, I wanna tweet all of this lol
Also I should add
we're talking about the default system containers
whatever you make we don't touch
we don't destroy all the containers, we only destroy the default CLI containers.
I mean, yeah if that wasn't obvious, you always, always have the option to make pets, that's why they're called pets, you just have to make sure it's something you're willing to feed by hand.
Hmm, I see
I suppose I could rephrase some of what I have been trying to say as "I don't want to have to rely on "pet" containers, and I am considering ways to extend this solution for default system containers to also conveniently support all of my other containers"
yeah, it means when you just know the default container, aka
bluefin-cli
is always this way, if you want to extend it, add it to the file, if not, it's one command to clone it as mything
and then you'll just use that as your default in Prompt
Gotcha
and then when you want a fresh reinstall feeling you just rinse and repeat that as often or as little as you want
Right
I know this should be fairly straightforward to achieve via copy/paste and tweaking in the declarative config file, but is there already a command to clone the config?
Hrm, no that wording doesn't quite capture what I'm trying to ask or describe...
yeah, some variant of
just distrobox-blah
you don't clone the config
it's on your host, /etc/distrobox.d/distrobox.ini
or something like that
you just set it to not destroy it
Hmm
replace=true -> replace=false for that container in the distrobox.ini
Gotcha
or actually you could just set that,
just assemble
, and never have to deal with it again
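i.e. something like this in the ini (a sketch; the `replace` key is per distrobox-assemble, the container name and image are made up):
```sh
# sketch: mark one container as a pet so `just assemble` leaves it alone
cat >> ~/.config/distrobox/distrobox.ini <<'EOF'
[my-pet-box]
image=registry.fedoraproject.org/fedora-toolbox:39
replace=false
EOF
```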
actually, if we just title the announcement "system containers will default to replace=true" then it doesn't sound so scary because it's not as scary as it sounds.
whereas "WE WILL MURDER YOUR BEAGLES" might not be the most user friendly approach.Heh, yeah
My thoughts right now:
Quadlets are cool. They work better than just doing a distrobox create command since they will only pull if a new image is available and you have Internet. Setting pull to always on distrobox means you constantly have to flip the switch back and forth, while quadlets sidestep it completely.
Entry into the container. Prompt doesn't always pick up that the new container is available. I sometimes have to close all of my terminals for it to get picked up. But then it seems pretty solid. I want to compare the one-shots vs quadlets for how fast prompt picks them up, but this might be a non-issue when they are started with the session automatically.
State management: quadlets are basically stateless, meaning that any package installed into them is gone on container destruction. This is great for having a consistent state but might be a bit of a paradigm shift for people. This also might necessitate having all distrobox package installs done in GitHub to avoid apt/dnf etc. commands from running and slowing things down. I want to explore toolbox a bit more since it doesn't seem to do the same at all.
Pattern extension: quadlets have a few key/values that you would need to change between images. The bigger challenge is actually making an image that gets updated as necessary. The boxkit idea is great for this but is still a startup cost. The systemd oneshot services are in there to just rely on the tooling that distrobox provides via the assemble file. You could also use a quadlet with a local image if you want/need a quick custom image but still want to stay in this style. However, if you are building an image you probably are already comfortable with using podman directly, or will use distrobox/toolbox and have state accumulate.
But I think the one thing I want people to understand right now is that assemble and quadlets are different ways of tackling this problem.
I also have yet to look at how prompt handles container profiles.
The most annoying thing for distrobox/toolbox right now is that with no init system we have to rely on /etc/profile.d to do actions in the container after build time, and the quadlets don't give us shell manipulation.
But I want to play with the toolbox quadlet more today. If it works how it seems to work I might vote that be our preconfigured containers default if host-spawn is working right
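For anyone following along, roughly what one of these quadlet files looks like — a sketch only; keys per podman-systemd.unit(5), the name/image/label here are illustrative:
```sh
# sketch: drop this in ~/.config/containers/systemd/ and the quadlet
# generator turns it into bluefin-cli.service on daemon-reload
cat > ~/.config/containers/systemd/bluefin-cli.container <<'EOF'
[Container]
ContainerName=bluefin-cli
Image=ghcr.io/ublue-os/bluefin-cli:latest
Label=com.github.containers.toolbox=true

[Install]
WantedBy=default.target
EOF
systemctl --user daemon-reload && systemctl --user start bluefin-cli.service
```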
I've patched distrobox-init to work better with Wolfi.
I'm also starting to think that we should have an additional label for quadlet/systemd managed distroboxes. Basically tell distrobox to not do upgrade, list, and whatnot on them. If systemd/quadlet is managing the removal and new build pull of them. The toolbox/distrobox management tools should then ignore them.
I think easiest is going to just be something like an additional label and that can be easily added to distrobox at container create or in the quadlet using a label command.
great idea on the label
@j0rge I've solved the whole toolbox list issue. And understand better what distrobox is doing for detection.
toolbox list --containers provides a list of all containers with the following label: "com.github.containers.toolbox": "true". If the image doesn't have this label, toolbox will add it as part of the create step.
Meanwhile, distrobox only displays a container if it has a mount with distrobox* in it.
We remove this label from our boxes.
For distrobox, we can control the mounts in a quadlet. If using an assemble file, we need to add some sort of label support or another way to mark these boxes as "don't manage".
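so detection boils down to roughly this (illustrative):
```sh
# how toolbox finds "its" containers, per the label above
toolbox list --containers
podman ps -a --filter label=com.github.containers.toolbox=true  # same set
```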
Adding this here: https://universal-blue.discourse.group/t/prototyping-the-bluefin-cli-experience/469
@M2 @Kyle Gospo @Gerblesh ok so I'm on the road and on awful wifi. Would tying in the quadlet into ublue-update/topgrade make sense?
the reason I bring this up is I booted but didn't have network because there's a captive portal
then I sign into wifi
and can't go into my toolbox because it's been destroyed
I wonder if
ujust update
can also run the quadlet and "do it's thing" in addition to onboot so I don't have to remember how to do the system service restart part
(just spitballing some ideas here)
so `ublue-update`/`topgrade` would run in a container?
no, they'd restart the quadlet service on the host
topgrade just like ublue-update can run any bash script you'd like
oh
so we just need a working proof of concept
oh baller
I would be in the container when I call it, but M2 made it so transparent that I can just do `ujust update` in a container and the system does the right thing
distrobox-init would need a patch to handle the no-internet situation. Otherwise, when it tries to do the package checks it will fail. I'm possibly looking at stripping out all of the upgrade logic inside of distrobox-init for these quadlets (so distrobox-init-quadlet). The container should be able to run without the internet.
I'm leaning towards us not having the autoupdate key in the quadlet and using topgrade to call a bash script that does a systemctl start on a script/service to restart the container. That one level of indirection means the user could turn off auto updates by masking the service, without modifying the topgrade config.
so you're saying, "have ublue-update/topgrade drive?"
I've been thinking, if we centralize everything around topgrade then it'll make "I'm going on a trip I need to shut things off in one place" easier?
yeah. The autoupdate seems to work fine for me as a user who, generally speaking, has internet when they log into their computer, and it will grab the updates each time my computer logs in and out.
But the autoupdate is a label, so it's kinda something that you cannot dynamically turn on or off. Say you are going on a trip and are like "my computer will not turn off for the next X days." With something else you could turn off those updates easily.
right
ok so I see what you mean
"don't autoupdate via the label, leave that off, let ublue-update/topgrade handle the logic"?
yepp.
this feels like This is The Way
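so concretely, something like this (a sketch — the `[commands]` table is per topgrade.toml, the service name is illustrative):
```sh
# sketch: drop the AutoUpdate label from the quadlet and let topgrade drive
cat >> ~/.config/topgrade.toml <<'EOF'
[commands]
"bluefin-cli quadlet" = "systemctl --user restart bluefin-cli.service"
EOF
```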
but big issue right now for quadlet is that distrobox hard relies on a connection on first run.
and naively commenting out all the package stuff works but would remove the additional_packages functionality. I can probably do some sort of condition check before entering the dependencies check. But we could possibly just do this comment hack job to get this working. This also means startup is basically instant, since the slowest thing was individually checking for dependencies on first run
bluefin-quadlet was up in less than a second
Will play with this a bit. But since we put /usr/bin/entrypoint into the bluefin-cli right now, we could just do this for quadlet style. When using distrobox-create it would mount over it.
Dumb idea incoming
Can we skip the destroy on boot if there's no network?
Oh actually NM, you mentioned removing the boot replace entirely right?
No. We cannot remove that. Quadlet is using podman run commands with --rm flag. When the service stops all of the state of that container is gone. The exec stop is literally making sure it's gone.
What I'm looking at is stripping out the network required items in distrobox-init.
The container is destroyed when your session is over so it's gone at shutdown.
At startup it effectively does distrobox-create and distrobox-enter at the same time.
If you notice in the Wolfi toolbox, we have all of the distrobox tools added into it. So the quadlet isn't actually using the host's distrobox-init, distrobox-export, and distrobox-host-exec
ah ok
Ok more on quadlets
Alright so in order to support the "captive portal" or offline use case M2 thinks we can work around it, but currently that involves losing the additional_packages functionality, which we need.
@EyeCantCU https://github.com/ublue-os/toolboxes/blob/main/toolboxes/bluefin-cli/files/etc/profile.d/command-not-found-host-exec.sh here it is
I think the smartest thing would be, if Internet you can do additional packages, if no Internet no additional packages.
I'm not sure if that would surprise a user or not?
I also could see not having additional_packages encouraging the boxkit approach instead, or keeping on using something like brew, which maintains its state otherwise
I personally will use the boxkit approach since I have multiple computers, but I can see that annoying people.
so let's say I'm on the road, no internet, no additional_packages.
I get to the hotel, internet, if I `just bluefin-cli` or there's an update, would it rebuild with additional_packages or are you saying we'd remove that entirely?
Bluefin-cli would be outside the quadlet style and that would work fine.
Distrobox-create and enter are used with the just command.
Distrobox-create volume mounts the host's distrobox-init, distrobox-host-exec, and distrobox-export into the container. So they would have additional_packages if they so choose.
For the quadlet we currently have the distrobox stuff included during build time meaning we could patch them.
oh ok so are you saying we could add additional_packages there too?
Yes,
ujust bluefin-cli
currently does a distrobox create command. This could instead be a distrobox assemble command to have an easy to access additional_packages. In order to create the container the first time you would need Internet.
For the quadlet right now we want it to autostart. Since the container doesn't exist, the default distrobox-init requires Internet in order to start up. So we could modify it to check for Internet before attempting to install packages.
But that would mean if there is no Internet the additional_packages wouldn't get installed, since it needs the Internet to fetch and install them.
right
well, that seems to be the least worst option for now
wait, now I'm confused, sec. Let me reread again
ok so question
couldn't we do an assemble first as part of first user setup, then just hand it over to the quadlet?
that way subsequent boots without internet should still work right?
Nope. Quadlet does a podman run command, not a podman exec.
Specifically it does a `podman run --rm` followed by two `podman rm` commands, while distrobox enter does a `podman start` and a `podman exec` on an existing container.
Quadlet fully removes the container.
Ahhhhh
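roughly, the two lifecycles (illustrative commands, not the exact generated unit):
```sh
# quadlet lifecycle: stateless, the container is recreated on every start
podman run --rm --name bluefin-cli ghcr.io/ublue-os/bluefin-cli:latest  # ExecStart
podman rm -f bluefin-cli                                                # stop cleanup

# distrobox enter lifecycle: stateful, reuses the container you already have
podman start bluefin-cli
podman exec -it bluefin-cli sh -l
```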
ok so for bluefin-cli you're thinking assemble all the way then and let ublue-update handle the rest?
I prefer the quadlet approach, but if you absolutely need something else or don't want to go with quadlet, assemble.
Ublue-update handles updates for both styles. For quadlets it's restart the service and the container is built on the new image; for assemble it's no different from the current situation. Maybe have an option to do a distrobox assemble create --replace to better mimic quadlet
ok so if we want to stay with the quadlet approach, what do you want to adjust?
sorry I may be asking the same questions over and over, I'm a little out of my league here, heh
Depending on what you decide is best, config is setup for distrobox assemble, any needed packages could be added to the .ini there and it'd apply for anyone wanting to setup bluefin-cli
I'm pretty new to quadlets but I can dig up more info to see if there's anything that could/should be adjusted for this
yeah and also as I think about it.
toolbox was "1.0" of this right
distrobox was "2.0"
but each of them were around "how do I manage adhoc containers?"
I think if we look at the toolbox repo as "3.0", where each of them behave similarly, whatever you choose it'll be auto managed for you and will have a method for you to declare your packages. The system should take care of everything else.
Exactly
I saw this comment on toolboxes that I can't find that surprised me, one sec
so I asked on mastodon why people think flatpak CLIs are a good idea and a lot of it is basically "the default toolbox experience sucks"
note that josh can easily deploy and maintain kubernetes clusters. 😄
and someone also pointed out that by default the fedora toolbox doesn't update or anything
they just set you up with it and then you're on your own
there's no equivalent to `distrobox -a` in toolbox (seriously? lol)
and third point, Prompt isn't anywhere except in gnome-nightly or on a ublue image, so they're all forming opinions on the UX from a year ago.
Saw it and read the replies earlier today. Wasn't aware of toolbox lacking a way to update things. Speaking of which... I don't think it supports Wolfi
I'll have to PR that today
it does not, I think if it did it would solve problems
for toolbox?
I doubt they'd accept it.
Nah. Distrobox support for Wolfi
I mean, 90% of the problems we're trying to fix is "toolbox doesn't want user input, they want to provide a UX and have the user use it the way toolbox intends."
I don't like that model because no one wants to update 50 operating systems haha
Or well... Containers in this case but still
it's like "the person who makes Prompt and the person who makes toolbox work for the same company" and we're the ones figuring out how to glue them together. I don't understand why internal comms is so hard for RH, but you know, big companies lol.
Does Topgrade support distrobox ootb or does that need scripts? Can't remember offhand
Would partially work around that limitation
I am insanely curious how much smaller the disk usage will be if we remove a bunch of the stuff distrobox auto installs. I suspect we'll pull in stuff anyway, but I just gotta know.
topgrade will do both
Wanna know too
Oh that's awesome!
I was curious about that because if it didn't I was going to add it
yeah so if you run it, it will apk upgrade bluefin-cli, and then later also repull bluefin-cli. I think we want to skip the in-place upgrade and just do the repull
That would be optimal
@EyeCantCU when you get around to it note the disk space differences, that'll be juicy clickbait. 😄
only reason we don't have toolbox support in Wolfi right now appears to be lack of shadow-utils and by extension a pam stack.
Toolbox has most of, if not the same, package requirements as distrobox; distrobox will make sure they are installed, while toolbox instead has curated images to work with it. I think we could probably remove a lot of the stuff and be fine, but toolbox still wants those half-GB images as well.
For topgrade, it will update both toolboxes and distroboxes. For our quadlets I would say we should have them be skipped and instead rely on the new images.
For things managed via distrobox-assemble, I think the ini file makes the most sense; just have them get reassembled via a timer. Topgrade will fetch the newest podman images and then, when the timer fires, the new container gets built from them. The downside for distrobox-assemble is you need internet for all of those features, and it will make sure all of those convenience packages are there.
I think the long term play with these managed containers (at least with bluefin)
Quadlet:
You want brew to grab packages as you need them. This is for cli stuff but your only state is what is in $HOME (and brew)
You want to do customizations. Go the boxkit route and aim your quadlet at your customized image. You will have a larger download, but your packages are tied to your actual container instead of living in brew.
Assemble:
You don't want to go all in on quadlet. But you see the issues with letting a container just languish around. So you go in on having the container be recreated periodically. You may want to extend the image using additional_packages or other modifications, but don't want to commit to making your own image via boxkit
| shadow-utils and by extension a pam stack.
@EyeCantCU do you know if these are coming to wolfi? I know that's a complicated question. 😄
Definitely will!
I'm unsure about either a PAM stack or shadow-utils landing in Wolfi but it's something I can ask about and explore implementing
For quadlets + topgrade, are you thinking to have a systemd service for updating the container and start it as a custom command in topgrade? Or have topgrade run `distrobox update` directly? (hope it's okay for me to pop in here since I'm playing with the same stuff in parallel)
quadlets+topgrade: topgrade can run an arbitrary command, and that command would just be a `systemctl --user restart name-of-container.service`. distrobox upgrade we'd like to avoid for quadlets since that means the container deviates from the image.
Ohh just a restart because topgrade already updates all images
Are you sure you want topgrade to restart containers? What if it auto-runs in the background and kills a container I'm using?
What I'm thinking is to have an intermediary service; if you don't want the auto upgrade you can mask the service
Okay, nice. Then you just have the updated image queued up for the next restart
I would like topgrade to update homebrew in bluefin-cli.
The naive no internet situation for bluefin-cli is up
Got to be a way to silence this:
Lol. Just commented it out. No option provided to make it quiet so it was printing every time I entered the container
We can pass the value used for LANG from the host as well... but that requires another change to distrobox
Here we go:
Hmm...
Oh wait. Nvm. I see why that is
Pull request up
https://github.com/ublue-os/toolboxes/pull/35
GitHub
feat(bluefin-cli): Enable atuin and ble.sh out of the box by EyeCan...
Retrieves and extracts ble.sh. Enables both atuin and ble.sh via the bashrc
ble.sh can be packaged but I think it's silly to package a bunch of shell
@EyeCantCU those symlinks were already made in wolfi-toolbox
oh, I almost approved just now, I can just wait?
My bad. My apologies. Didn't realize I had copied those from earlier
and bsherman finished the motd just now
now we can wire it all up
And we ship the /etc/bashrc so put those lines in the file
Instead of echoing from the Containerfile
Especially since those echoes would be outside the double source guard
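(the usual guard pattern, for reference — the variable name here is made up):
```sh
# typical double-source guard at the top of /etc/bashrc (pattern only)
if [ -n "${_BASHRC_SOURCED:-}" ]; then
    return
fi
_BASHRC_SOURCED=1
```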
Will do. Thanks for catching that!
That's awesome!
@j0rge @M2 PR updated 🙂
oh, conflict
I think I merged something ahead of you, I apologize for that!
No problem, I can rebase
All right, it's good to go
merged!
Excited to mess with ble.sh and atuin
GitHub
atuin: ble exits with an error · Issue #36 · ublue-os/toolboxes
ble/term.sh: updating tput cache for TERM=xterm-256color... done error: invalid value '/bin/bash' for '' [possible values: zsh, bash, fish, nu]
I get this
Already fixed haha
Caught it immediately after
Atuin is absolutely awesome. So much better
AMAZING
this is soooo good!
amazing
I don't get the motd but I can investigate later
I think the bleh stuff could probably be turned down a notch for a default?
I'm thinking it should feel like a modern version of what you're used to and not like a total jump. What do you think of that?
I agree. It's a bit much haha
Motd would be coming from the host. We don't have anything inside the container yet.
I don't get it there either
but will investigate tomorrow, I'm hitting the sack, gnight!
Gn!
You guys ever see this?:
Error: OCI runtime error: crun: ptsname: Inappropriate ioctl for device
I get it after a cold boot entering bluefin-cli
Yes. It seems to be something random that occurs with Wolfi.
I find that first entry works pretty well. But after that repeatedly trying to restart it can be painful
yeah I open it again and sometimes it sticks, was just coming in here to mention that
after it spits things out to the console it's fixed
Definitely specific to distrobox. I run the base and SDK containers via podman and docker all the time and haven't ever seen this before
yeah
for prompt, is there an escape sequence to put the scarf on the tab?
Unsure about that but it looks like BusyBox in Wolfi provides a lot of what's in shadow utils already
Not getting that error anymore after modifying the deps for what Wolfi has. One thing though... It looks like distrobox depends on a lot of what's there. Not sure how much we can slim this list down
Spoke too soon
I PRed in gawk just now btw
(for atuin/ble.sh)
okay. Instead of ble.sh, we can seemingly use bash-preexec and not have all of the blingy stuff
and that fixes tab-completion for things coming from the host
@EyeCantCU the issue is that busybox's implementation of shadow utils is slightly different. And some of the things are only in shadow like usermod, useradd, etc...
additionally I just replaced ble.sh with bash-preexec since ble.sh was getting a bit intrusive and broke tab-completion for the stuff on the host.
Sounds great. Definitely a bit much. I'll look into packaging it today. Shouldn't be too bad
yeah, building shadow-utils wasn't bad, but the default pam stuff is not usable since it goes back to old school requirements like password length being between 5-8 characters
Hmm... That is tricky
yepp. Basically until there is a reasonable pam-stack, shadow's passwd will cause issues. And busybox passwd is missing things that toolbox expects
so I need to put a better guard on the .bashrc.d file right now. Since our setup will result in it creating a distrobox if the expected one isn't there.
And that would be an ubuntu box on bluefin and unsure on the rest (whatever we set in /etc/distrobox.conf)
that's been a problem since day 0 anyway, heh
if I fat finger something I get an ubuntu container
okay. I will need to move this crazy long just command to be its own script instead. But there are definitely a few improvements that I can make to it right now. The zz-container.sh thing mostly just needs to check to make sure the container actually exists before trying to run exec distrobox-enter.
but initial wire-up is going on.
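i.e. something like this (a sketch; container name assumed):
```sh
# sketch of the zz-container.sh guard: only auto-enter when the expected
# container actually exists, instead of letting distrobox create a default one
if podman container exists bluefin-cli 2>/dev/null; then
    exec distrobox enter bluefin-cli
fi
```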
This looks great! Nice work
Select Quadlet for automatic management or Distrobox for manual management.
maybe?
See <link> for more perhaps?
Yeah, a link would make a lot of sense
also, if you want to split it out before I do the wording we can do it, which ever works for you.
I won't be able to get to it tonight.
You have a link to the source?
Link our repo? I think the manpage for quadlets are a bit dense
printf "Finished Bluefin container setup, run with ujust bluefin-cli
to reconfigure\n"
is that a valid use of `?GitHub
bluefin/just/custom.just at main · ublue-os/bluefin
An interpretation of the Ubuntu spirit built on Fedora technology - ublue-os/bluefin
is the file we're looking at rj
Oh this is new to me. The other day I looked and it was just a distrobox create blah
yeah so what we want to do is
when you login, the motd fires off
(that's done, looking awesome thanks to the work last night by everyone)
and then we say "do
ujust bluefin-cli
to configure your terminal"
then you can select "Host", which means they just want a host terminal
bluefin-cli (default): which will set up the bluefin-cli container
and then ubuntu and fedora
basically, give them 4 choices
but m2 is thinking a separate script instead of trying to inline it in the justfile
Absolutely awesome idea. A separate script for that much code is the way to go IMO
So, right now for the container as default, only your first terminal will log into the container.
Just trying to figure out a way to get a host terminal even when the container is the default.
I think maybe we can do like a preconfigured profile with Prompt?
where we can maybe just by default open a terminal with a pinned host + a bluefin container tab?
Maybe? For the profiles you can specify a custom command, but you won't have the scarf unless there is some sort of escape sequence to set it.
welp, lots of progress this weekend so let's let it bake and play with it.
❤️
ok with the latest stuff just merged
seems like it made "bluefin-dx-cli" even though I picked bluefin-cli
and it doesn't seem to create on boot?
I chose quadlet for the method
check to see if you got bluefin-dx-cli.target enabled?
just curious on what occurred?
doesn't exist
interesting. I'll troubleshoot this. I will probably remove the -dx ask.
If the container already exists as a quadlet it won't recreate it
yeah we don't need to put every container in there
otherwise people might be like "If I am on a dx image do I want a -dx toolbox?"
yeah, I can remove the -dx from the targets in the bluefin-cli.sh script, and the second "do you want this" prompt.
I'll leave the targets so if someone wants to use them they're already there
nod
@M2 I'm starting to think long term the play might be for distrobox to just have:
distrobox manage ubuntu
and then it quadlets up everything
that would be cool, just would only work with podman
nod
I was thinking about
distrobox assemble generate foo
that would generate the assemble file from an existing unmanaged container
Then one could add a distrobox generate systemd
or something on those lines, that would then create a unit file to recreate it at boot
This would make it compatible with docker, podman, lilipod (and other future container managers)
yeah that sounds awesome
If anyone here is interested (I'm a bit busy with $day_job now) we could open a discussion or a feature request to discuss what would be nice to have
or i'll forget everything by tomorrow lol
this would also be useful to do "snapshots" of a container, so the assemble file would recreate it as it was in a specific time
I would like to do all those things, but let's get you out of the hole first, then we can maybe start discussions in the upstream issue tracker
yea it was only so I don't forget about the idea
we capture everything in these threads and archive them so we'll be good to go
Since everything used to create the container is in the podman inspect for the existing container, this would be parsing. Are we limited to shell or could we use jq for this functionality?
Generate systemd sounds great and should be a straightforward implementation. Should just need the container name for making the unit file.
The idea behind an `assemble generate` would be to "save/snapshot" an unmanaged container, so one where you manually installed/removed stuff, so only parsing the inspect won't be enough
but yes, ideally I'd use a go-template string for the `--format` flag of podman inspect
(or docker)
Jq or Yq (for yaml) are useful, but that would become an external dependency of the tool, which would be a first, as for now dbox only needs a posix shell and a container manager
Ah, that would be quite a bit more involved for the generate command.
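e.g. something like this (illustrative fields; see podman-inspect(1) for the template syntax):
```sh
# pull create-time facts back out of an existing container with go-templates
podman inspect --format '{{.ImageName}}' my-box
podman inspect --format '{{json .Config.Labels}}' my-box
```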
I didn't know this was possible
Hey ...
you don't think we could just ship bluefin-cli.container like this and then skip all of the updating, service units, etc. It would just come with the base image?
yeah quadlets are great
We're kinda doing this.
For getting the quadlets onto disk inside our containerfile, we have a fetch-quadlets script that gets them copied from the toolboxes repo into /usr/etc, and we rely on systemd to get them in place. Then the quadlet generator turns this into our service file. So we change in repo and that filters to the user.
For the additional items, we have an actual user so we can't use sysusers, but we have distrobox tooling for creating that user on startup.
$HOME data is also handled via distrobox.
So, I think we are kinda doing this. This is why init at times might be helpful since we can do a more structured startup compared to sourcing bash scripts
@M2 filed upstream: https://github.com/89luca89/distrobox/issues/1189
GitHub
[Discussion] Service units and assemble for managed distroboxes · I...
Is your feature request related to a problem? Please describe. For projectbluefin.io we're shooting for a "managed toolbox" experience. We'd like the container to be set up for th...
quadlets are going to be a podman-only thing. So I think making service units with assemble files would be a more welcome idea for distrobox.
nod
if I do a reboot and my bluefin-cli container doesn't come up, what should I look for?
check status of bluefin-cli.service
That is the generated service unit for the quadlet.
oh interesting, not found
that's concerning.
is your quadlet in ~/.config/containers/systemd or are you now using the one from the image?
since if there is a typo/error in the quadlet it will not generate into a service unit
I believe on this box we did it manually before it was on the image
iirc I manually copied these in here, should I blow them away?
yeah, you can blow them away. Make sure you have an actual quadlet in /etc/containers/systemd/users/ first though
but they've been working for me
though I normally access them via the prompt menu
So I figured out how to do something similar inside of WezTerm but prompt is much, much more accessible
ah there they are!
oh beautiful!
everything came up correctly on boot. ❤️ ❤️ ❤️
do you think I should make a video?
I think it's ready. Only changes that would be left is making a prettier UI.
And if a person wants to modify the quadlet, all they have to do is copy it from /etc/containers/systemd/users/ to their personal one in ~/.config/containers/systemd/
And a pristine copy is always kept in /usr/etc/containers/systemd/users/
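so concretely (file name assumed):
```sh
# take ownership of the shipped quadlet by copying it into your user dir
mkdir -p ~/.config/containers/systemd
cp /etc/containers/systemd/users/bluefin-cli.container ~/.config/containers/systemd/
systemctl --user daemon-reload
```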
Came here just now to ask about that
@j0rge
\o/
Exciting.
Couldn't wait to have my Zsh plugins working under Wolfi lol. Package needs some cleanup and post install the prompt is a mess so I'll have to tweak that as well but it does build now 🙂
and fish is just one rust binary now right?
All too easy</vader>
I gotta play more with this Wolfi box soon, very exciting stuff
You're gonna love it
I'll have to get it too 🙂
Are you using melange to locally build and add it?
Yep
If you find yourself wanting something and Alpine has it, `melange convert apkbuild PACKAGE` will get you halfway there most of the time
I think I've wrapped Zsh up excluding sub packages for completions (pretty much all of these are for things we don't have anyway)
We actually still want those for doing host completions.
For bash I bind mount the host's bash-completions directory and source them so that tab complete works with them once you type out a binary name.
That's how ujust tab completion is working
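roughly like this (a sketch — distrobox mounts the host root at /run/host, the rest is illustrative; the real setup would lazy-load rather than source everything eagerly):
```sh
# sketch: source whatever the host's completion dir provides inside the box
for f in /run/host/usr/share/bash-completion/completions/*; do
    [ -r "$f" ] && . "$f"
done
```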
Then again, we can just do the same method using the host's
Can certainly do that. Gonna be a lot of completions packages lol
Does melange handle sub packages well?
We can still rely on a lot of things from the host by sourcing the host's zsh completions as well, but anything that is normally installed with apk is going to need stuff.
Yep! It's fantastic for that. It's so cool I can do all of these completions with one sub package entry using a range 🙂
I'll link you when I'm done
It's literally:
All I have to do is create the directory, then copy completions over to the sub package directory. It's brilliant
What the...
py3-snowballstemmer
All right we've got fish
Woooooo!
Also I'm going to get clarification on handling those Zsh completions
Everyone's doing it differently
Oh my gosh, that's funny.
`bc` landed in Wolfi 5 hours ago and I just used it for fish
Hmm... I haven't been able to find either function Mitre mentions here:
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2007-1397
Probably a false positive
CVE -
CVE-2007-1397
The mission of the CVE® Program is to identify, define, and catalog publicly disclosed cybersecurity vulnerabilities.
This is the list I've got to meet current distrobox dependencies with wolfi:
Unfortunately, some of these (like `xauth`) rely on their own list of dependencies. Ultimately, this is what I end up with
This doesn't include shadow (only needed for useradd) and pinentry, as those are not in wolfi. I will need to remove those from the missing dependency check when the container is derived from wolfi
what do we use xauth for?
I was hoping since there's really no gui apps in wolfi that we could drop a bunch of those
I'm going off of the built in dependency check in distrobox - I'd assume those are accurate, but I will double check to make sure
From a quick search, nothing lol. I will revisit these one by one. Not sure why they are listed as deps
still looks smaller than what we have now
ghcr.io/ublue-os/wolfi-toolbox latest ccee5d474505 3 hours ago 109 MB
that's pretty stellar
Yeah, it's close. I should be able to get a lot of these out. Looking over distrobox itself, a lot of them look like they're for convenience
yeah, if it's CLI only my hope is that that would pay dividends
also, making sudo-rs work is in bluefin-cli only, that should probably be moved to wolfi-toolbox so it's there for everyone
I don't expect zsh and fish to be that large either, so this should end up being the perfect distrobox base image
Oh for sure. I use it every day as is
Down to wolfi-base, findmnt, BusyBox, coreutils, bash, and curl
I want to carry this change to Alpine as well
Installing basic packages... Error: An error occurred
Missing something
Mount, unmount...
I maintain the alpine toolbox in the upstream toolbox images repo
actually, we should move wolfi there since it's where all the toolboxes live
https://github.com/toolbx-images/images they have the CI already set up too
until shadow exists, it won't work
(once we have everything we need)
also, I think i've figured out how to hide toolboxes and distroboxes
dang this terminal setup feels so good
This is The Way
@Kyle Gospo you need to get in on this
with the transparent host passthru
dysk is my jam
Two PRs opened to distrobox 🙂
I REALLY don't want to jinx it but post package cleanup, I am not seeing that ptsname error anymore
how much smaller is the newer image?
Will test here in a few minutes 🙂
@j0rge 🙂
OMG!
Pretty crazy
Hey @EyeCantCU saw the PRs
I'm not sure about these, removing the packages from alpine/wolfi would make them smaller, but then it would make it behave differently than other boxes
The point of dbox is having a reasonably useful userland, with GPU acceleration, with all the various basic tools (ssh, stuff like that)
Even with the regular setup the alpines are already smaller than the rest
it could be of use having a "distrobox create --minimal-distro" maybe, that just does the bare minimal for everything to work
But also I want to be careful with new flags, as it might always go way out of hand and become a flag-fest 👀
Thank you for looking over them! I appreciate it 🙂
I totally understand where you're coming from. I do have one question though. If a distro doesn't provide a dependency in the `dependencies` list, an upgrade will trigger every time the distrobox is initialized, right? The reason I ask is because Wolfi doesn't provide a few of these, i.e. `useradd` from shadow, whatever `script` is from (haven't traced it down yet), and `pinentry`, so I've noticed the initialization step runs through everything all over again a lot of the time. It also sounds like getting shadow or just a general PAM stack in Wolfi may take time (packaging it isn't enough from what I've heard unfortunately)
When I was writing this up I thought a flag for this may be a great idea. I wouldn't mind implementing that though I understand the concern lol
It's also to my understanding that we don't have a full `mesa` stack quite yet, though some of it's there
And our Xorg stack needs a little work but I digress 🙂
It actually shouldn't! There is a flagfile in `/.containersetupdone` which should skip this stuff once it has successfully been entered the first time: https://github.com/89luca89/distrobox/blob/main/distrobox-init#L391 <- here
GitHub
distrobox/distrobox-init at main · 89luca89/distrobox
Use any linux distribution inside your terminal. Enable both backward and forward compatibility with software and freedom to use whatever distribution you’re more comfortable with. Mirror available...
if it doesn't then that's another bug 👀
Oof. I'll have a closer look when I wrap work up later today
Container setup done is only for the package manager stuff right?
Yes, skips only that part
@Kyle Gospo is making an opensuse rpm of Prompt doable? Poor @89luca89 hasn't used Prompt yet.
Let me look into it today, needed to update prompt anyway
oh I was wondering if it was just "check a box in copr"
Have to patch and replace one gnome package
I'll try in a VM shortly
I'm going to open a PR on Wolfi today 🙂
you think that'll be useful for rich and co?
I wasn't aware that prompt was still stuck on fedora hosts
I'll test prompt in a VM so I can also understand how it enters a distrobox container
Because if it doesn't call distrobox... well it's losing a lot of various integration things along the road
the setup m2 just added last night calls distrobox enter? I see it in my config, that's what it's supposed to look like right now right?
as far as I can tell it tries to figure out what the run command is. For the profiles I've made it uses distrobox enter, since there is some weirdness with prompt detecting systemd-created podman containers
For how prompt apparently enters: it looks for the [ CMD ] of the container and will then execute that. It can also handle if an entrypoint is set and will use the CMD line provided at create time.
So it seems to use distrobox-init (as /usr/bin/entrypoint) as the entrypoint, but it might miss things done by distrobox enter from outside the container.
Though christian seems to have made mention of treating those differently
GitHub
linux-pam: add missing configs to linux-pam package, fix broken su ...
Fixes:
Right now the /etc/pam.d/login file provided by the linux-pam package refers to non-existent profiles; this also breaks su if you install linux-pam
Adding the minimal pam configs to make them w...
"hi luca we can help with distrobox!" "hang tight kids, let me fix your stuff first." lol
this is also important for the next release of distrobox, which will use `su` to do login shells+tty
oh, that would fix things
Yea, a previous PR did great work rewriting the start/enter thing a bit by using `su $USER` instead of podman's `--user $USER`; this allows for a proper login (and a proper seat from logind in an init container), proper shell initialization and so on
Problem now is that `su` does not instantiate a tty correctly, so you need `runuser` or `su --pty`, both of which need to be the non-busybox version
If you install the `util-linux-login` package (both in Alpine and Wolfi) it will pull `linux-pam`, as non-busybox's `su` uses `pam` to authenticate stuff
And `linux-pam` right now breaks everything because it's not properly configured, so the PR fixes this, and will also allow a better experience with Wolfi+Distrobox
Why have I not been using distrobox-ephemeral more?!?!!?!
that's how I test everything
this is a power up. I love it.
YOU ARE THE GOAT! I will make sure this gets eyes
Next up. Shadow-utils!
they break with pam?
wolfi doesn't have shadow at all
ah I see
Will be another project this week...
https://git.alpinelinux.org/aports/tree/community/shadow/APKBUILD
Let me know when you do it, we will have to add also the pam stuff for shadowutils too!
this wolfi box
this is the way, distroless distroboxes
@89luca89 congratulations on your commit landing in Wolfi! Just bumped the epoch so the package will get rebuilt with it 🙂
Fish is now in Wolfi
Uutils will stand in for coreutils now
And... I'm not sure what's going on with my Z Shell package yet
Okay, I have a baseline shadow package. Will be finishing this tomorrow. Need sleep
oh fantastic thanks! 🎉
great 🙂 shadow will need to depend on linux-pam (from what I see on alpine) and contain some additional pam configs, let me know if I can help
I'll definitely let you know :). Thank you so much for the help with this
Needs libcap too 🙂
yea probably shadow now also needs those if you want to create users with subuid/subgids
This would open the road to eventually run rootless podman inside wolfi, which could be useful as a replacement for docker-in-docker situations
I want that so bad. In Bluefin CLI, the host's docker install gets symlinked but it only half works
All right, I've got a package for libcap 🙂
yeah, a fully transparent container, but I can run kind
that's so baller
it should be running from the host at that point and no different from outside the container. What's janky on it?
podman in podman would be awesome. Would hopefully work nicely without init.
Things like terraform blow up because it can't tell whether or not docker is running. Linking the socket doesn't seem to be enough either
oh, terraform is in the container and docker on the host?
for podman-in-podman or podman-in-docker no need for an init, as podman is standalone (it can use systemd stuff if present, but it's not mandatory)
docker-in-podman or docker-in-docker needs an init, for dockerd
mmmh seems more like a TF-provider problem that is assuming some stuff, probably searching for the systemd service?
Likely. It's an unfortunate codependency
all this without init?
Yup. With brew
we are indeed in the future
the podman stuff yes.
Unfortunately it's half my workflow in a given day :/
Definitely. Absolutely fantastic we don't need an init system for any of this
this is the way
this is so the way
I am very interested in seeing this as a sysext down the line. Like, just use Fedora for the kernel/desktop, etc. and then wolfi userspace. Our dismantling of the distribution model would be complete. </vader>
Would be absolutely perfect. It's moving really fast. We're getting lots and lots of core packages into Wolfi every day
I am keenly aware. Saw smoser's first merge too.
bummer you stopped using the merge queue, but I'll live.
There were some issues with it unfortunately
I don't think it's coming back soon either
oh that's all good, I'm following via mail
I've reverted back to email as a better github notification framework, which is sad
I just have the app on my phone set to forward notifications to my watch. Not perfect but I see important things
yea first they need to allow R/W-able sysexts
I'm not even going to look at it for a while, this will do the job
let it cook 🫕
yeah for sure ... the box is working pretty great
yeah here's the difference I think and why we've diverged from fedora in the use case:
https://pagure.io/fedora-kickstarts/pull-request/1015
"Toolbx containers are long-lasting
pet containers for interactive command line use"
I mean, it's the same with distrobox, but if you use assemble, they can be ephemeral
"Toolbx containers are ephemeral containers for interactive command line use that don't lose your data or state."
that's what I wanna roll with
"but how? oh, it's just a volume mount." and then they just use it. 😄
@EyeCantCU
do you think that's because I'm trying it in a shell?
like if I did that in the containerfile you think that'd do the trick?
It's saying that procps package owns things that uutils wants to provide.
Apk doesn't want to overwrite files being tracked by another package.
Procps is probably a split package from coreutils
yeah I was asking because rj was just in this package so I was curious
sec let me find the link
https://github.com/wolfi-dev/os/pull/12613/files
I wonder if procps needs to be adjusted?
also I found they use this https://release-monitoring.org/ for release tracking, which is a fedora service heh
but for things already on github they can do 80% of the packages updated within 24h, that's absurd
Yeah. Need to see what uutils is providing.
duh pay attention castro lol, he's one step ahead
Looks like they are not using coreutils kill and uptime.
If he is using make, the makefile has the ability to skip modules. So we could probably split it
I need to compare package contents and see what needs to be split
Ah I see you saw what I said lol
@M2 @89luca89 @bketelsen hey so, Brian and I accidentally found something
so since VScode in bluefin is already using docker, and we auto install devcontainers and all that, it's basically all set to go ootb.
we found that if we use distrobox with the docker backend, it makes vscode <-> distrobox bridging work ootb.
no weird toolbox wrapper or any of owen's toolbox scripts.
people complain about this constantly too. So the idea I wanna kick around is, would using the docker backend for distroboxes make sense?
I would prefer no. We always use rootful docker for distrobox
Yea docker==root it would be pretty ugly
also, when you use vscode->docker->distrobox it enters as root in the container instead of your user
(at least from what I remember :P)
I think you can use vscode's container extension with podman without the bridging and stuff, if you don't care about the user tho
the plumbing was mainly for that
yeah so, it's going in as user
which I found odd as well
oh wait, no, not in that one
that's root
@bketelsen can you confirm yours is as a user and not as root?
# = root
oh ok, so it's the same problem as before
i'm more surprised that user = root didn't pop up from starship.
the character was caught so the $ was now a #. But root is also supposed to be there
I could have sworn that was user last night. but it was late and in my defense I am getting old
damn why can't we have nice things
bash: brew: command not found
I started getting this this morning
how are you entering?
and check if /home/linuxbrew exists
profile in prompt
it does and all my brew stuff is in there
what is your $PATH?
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/sbin:/bin:/home/jorge/.local/bin:/home/jorge/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/home/linuxbrew/.linuxbrew/bin:/home/linuxbrew/.linuxbrew/sbin
I need to go to the dentist now though. 😦
the error seems benign?
wait brew works?
yeah brew doctor runs and everything
oh wait!
Warning: /usr/bin occurs before /home/linuxbrew/.linuxbrew/bin in your PATH.
This means that system-provided programs will be used instead of those
provided by Homebrew. Consider setting your PATH so that
/home/linuxbrew/.linuxbrew/bin occurs before /usr/bin. Here is a one-liner:
echo 'export PATH="/home/linuxbrew/.linuxbrew/bin:$PATH"' >> /home/jorge/.bash_profile
ok gtg will be back later
We have the opposite so that apk stuff overrides brew
yeah.
ok so expected
but brew works. That sounds like brew is being used before the bind mount is in place.
ignore my issues, I'll test in a clean VM when I have time, heh
lol
i got it setup as my user
view -> command palette -> Dev Containers: Open Attached Container Configuration File...
then add
"remoteUser": "bjk",
and restart
the only error is when distrobox tries to set your user password
(in wolfi)
wolfidev projects/ublue/dxctl via 🐹 v1.22.0
📦 $ id
uid=1000(bjk) gid=1000(bjk) groups=1000(bjk)
but if you do anything before you set the remoteUser, it'll break things
I had to delete ~/.vscode-server because it was owned by root
that config file is at here
i don't know if you can set it globally somehow
you can use a wrapper on docker/podman to
handle that
you can also prepopulate the imageConfigs and namedConfigs
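a minimal sketch of prepopulating one, assuming the extension's usual globalStorage layout (directory and container name here are illustrative, not verified):
```sh
# Assumed location for attached-container name configs; may differ per install.
cfgdir="$HOME/.config/Code/User/globalStorage/ms-vscode-remote.remote-containers/nameConfigs"
mkdir -p "$cfgdir"
cat > "$cfgdir/wolfidev.json" <<'EOF'
{
  "remoteUser": "bjk"
}
EOF
```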
yea in the end you either setup
remoteUser
or trigger a login using a custom command su your-username --login
which is more or less what that plumbing in the docs is for
Dang I thought I had found the way
I feel like this is a war of attrition and the universe is winning lol
the real solution would be devpod with an "override devcontainer.json" option, that would use a set of devcontainer.json generated by distrobox at creation
sadly this feature is not planned for a long time internally :/
i love devpod. But that is sad
Ok so in that case it's all just down to a sane Prompt config
I think if we asked upstream for the ability to launch prompt in a new window with a certain profile that might be the way?
we have that
I thought I only saw in a new tab?
Sitting in the waiting room heh
i got a desktop icon for
wolfidev
when I did distrobox create and it opens prompt in a new window
so whatever we already have just works
Oh! That's good to know
It might be time to dconf reset my PC heh
interesting, how do you force a set terminal to open with the desktopfile?
I stand corrected. We can't specify profile with new window. It will use your default profile. So, I need to figure out a method of persisting default
we have a wrapper for the default terminal called xdg-terminal-exec.
And this gets picked up as our terminal
Meantime I was trying to dig into prompt's source, trying to make it call
distrobox
instead of podman when entering distroboxes,
sure it's complicated. 😓
yeah, it seems to be trying to figure out what the default command is from podman ps.
But since distrobox uses --entrypoint instead I think it gets confused?
It definitely doesn't pick up all of the host --env variables that distrobox enter does
yea, it's not the same indeed
I think he used toolbox as the model, and toolbox handles entry a bit differently. They set the command to their
toolbox init-container
which is very similar to distrobox-init
yea the entry-point is generally the same (toolbox/distrobox init) but that's for the
podman start
part, for the toolbx/distrobox enter
part, there are various things that are performed differently
For instance
Toolbx uses an "allowlist" of environment variables, so for every new thing, you need to either patch toolbx or do something in ~/.bashrc
Distrobox uses a "blocklist", so it allows everything except some stuff
This leads to many differences when used with prompt
yeah. I'm noticing that
I'm still beyond impressed that an sh script can do all of what distrobox does
well it's its advantage and disadvantage
It's easy to hack, easy to contribute and compatible with any POSIX (even BSD)
But it's also hard to do a lot of stuff (argument passing with quotes and spaces and slashes, for exported binaries for example) and generally speaking being SH is not seen as a positive these days
Still impressed
@EyeCantCU are the bazzite-arch images still maintained? Their PGP Signature is failing in my CI
Should be, I'll double check the last build
Nvm it was a bug in distrobox-upgrade 😄
Dang How much I love this CI 😂
@M2 ok at a keyboard now.
so this is what I was thinking -- we can create the profiles right, but as of now we don't have a way to set it as default right?
I have not configured that yet
oh, so we can set that?
It's another key
Just in /org/gnome/Prompt
my line of thinking was "when the user clicks on prompt launch it. But when they ctrl-alt-enter, do
--new-window --profile=bluefin-cli
- which is why I was thinking it might be good to file that as a feature request upstream
--profile would be great
I'd be happy to file that
that would be nice, I will still have a way to set default when creating profiles
when I'm using the bluefin-cli profile my "My Computer" opens up bluefin-cli, not the host, is that happening to you?
Correct. Your custom command is distrobox enter
ok I didn't want to file it and then it's like "actually we don't need it"
ah ok
That's why we still need the Host profile
understood
but for the keyboard shortcuts ... we could just --profile each one and that would fix that bug as well
yepp. Or use -- distrobox enter
prompt -x "distrobox enter bluefin"
would work too right?
I just happened to see it in the --help on prompt
yepp
but name of the container
if we did it that way would we still need the custom command?
the custom command is per profile
oh ok
right, now I get it
ok not to get ahead of myself buuuuut.
if it's per profile and we can open new windows with each profile then we could also colorize each profile right? Like bluefin-cli is Catppuccin, but for the Fedora box we could pick something blueish, orangeish for the ubuntu one, etc?
yepp
ok so endstate, if I enable all 4
I'll have 4 profiles
and 4 entries below "my computer" for each box
Above.
They are profiles
oh, so just 4 profiles + My Computer?
Host Profile + Profile for each container
right. That feels like the way to me?
I click on Host. I get host. I click on bluefin-cli I get a Catppuccin themed window
like, first boot, you open terminal it tells you to
ujust bluefin-cli
you run that, it just makes the profile, and then I'll make ctrl-alt-t stay host, and ctrl-alt-enter calls that profile explicitly.
Yepp
then if you
ujust whatever-we-call-this
and select ubuntu, fedora, wolfi and it'll add those.
it's currently called
ujust configure-terminal
but is a private just entry
wfm!
https://gitlab.gnome.org/chergert/prompt/-/issues/98
ok more dumb showerthoughts
this /etc/apk/world file ... if we do the same thing as we do for the homebrew cellar, should that Just Work(tm)?
like I'm wondering if we could track that also in the bluefin-cli container
Different.
The world file tracks what packages are installed
right I'm wondering if on destroy/create we could apk add those, so the user can just keep on apk adding and then not have to deal with it, we'd transparently replace and rebuild nightly.
volume mount it
/etc/apk/world
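something like this, roughly (host path and image tag here are made up, just sketching the idea):
```sh
# Keep apk's world file on the host so "apk add" survives nightly rebuilds.
mkdir -p ~/.local/share/bluefin-cli
touch ~/.local/share/bluefin-cli/world
distrobox create --name bluefin-cli \
  --image ghcr.io/ublue-os/bluefin-cli:latest \
  --volume ~/.local/share/bluefin-cli/world:/etc/apk/world
```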
I blame @EyeCantCU
Lmao
ok I was looking for where bluefin-cli defines the volume: https://github.com/ublue-os/toolboxes/blob/main/systemd/bluefin-cli/bluefin-cli-distrobox.ini
this was in the quadlet in a prior iteration iirc?
yeah. Those are very old
Will bring them back up to date this weekend. I think I have something for handling default profile better.
working through the toolbox pr's and builds, I'll sort out the errors in a minute
@EyeCantCU @Kyle Gospo I think I've found why it was failing my CI 😅
we have a space saving action, one moment
https://github.com/ublue-os/bluefin/blob/3e0e07405df19774a934f8c399cea5226a6315c9/.github/workflows/build.yml#L67
that will remove all the default stuff in the default ubuntu container and free up tons of space. Adds about 3m to the build time but worth it.
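roughly this kind of thing, if you're curious (these are the usual big preinstalled dirs on ubuntu-latest runners, not the exact steps in our action):
```sh
# Delete the large preinstalled toolchains the build never uses.
sudo rm -rf /usr/local/lib/android /opt/hostedtoolcache /usr/share/dotnet /opt/ghc
df -h /
```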
@Kyle Gospo https://gitlab.gnome.org/chergert/prompt/-/issues/98
is kicking off a new prompt build annoying?
atm yes, it's failing to build w/ Christian's patches for vte in place
I'll revisit and see what's up, ours is just a few days behind
lmk if you've got time to voice, I have to show you something (but it's no rush)
I can rn
100% gonna cough in your ear tho
@j0rge M2 and I found some inconsistencies between the Wolfi base image and the SDK. Sorry for all the failures. Currently working on building DX from Wolfi base with a DX package list
no worries I think I stepped on ya'lls toes there sorry!
but hey wolfi and bluefin-cli are marked as not required so if it blows up it's totally fine, I'm for sure erring on merging fast for these boxes
It's all right haha
Wanna see how the builds play out but... https://github.com/ublue-os/toolboxes/pull/57
Looks like uutils is rewriting procps, but it's unfinished. Marking it as providing procps without a complete reimplementation could be invasive... https://github.com/uutils/procps
if it's some wild use case that won't be useful for wolfi overall then we can just not do it?
I think it would be very cool. I just want to do it in the right way
Unfortunately that would currently involve a --force-overwrite or marking it as providing things it doesn't though :/. Being able to mark something as providing specific files would be ideal
Using make with uutils we can specify things I thought?
We could remove
kill
from uutils. That would be sufficient
what I mean is, if we have larger fish to fry ...
though I just had fish for dinner lol
I have many fish to fry 🙂
They're all equally important
Shadow package is almost there btw
Super fun time
Libtool not installed?
Yup. That's the fun part 🙃
Quick question to @j0rge how are you shipping a "default" integration between distrobox/toolbox and vscode? I was fiddling around with vscode (I don't use it usually) and found that this simple podman-wrapper was working actually quite well https://github.com/89luca89/distrobox/blob/main/docs/posts/integrate_vscode_distrobox.md#third-step-podman-wrapper and doesn't need any json fiddling
Maybe you can find a way to ship this in a flatpak-visible-path and just suggest to set the docker-path config in vscode and it should actually work quite well
Both distrobox and toolbx
NB: this has the same issue as prompt's default behaviour (not a real
distrobox enter
) but if you ask me, this is way less of a problem for an IDE than a proper terminal, where I expect I can launch graphical apps and such
yeah at some point we gave up, about 3 months ago?
and just ship docker + the devcontainers extension.
no podman<->vscode stuff
got it, you can probably try that approach? seems way less of a workaround and seems to actually work from my testing
I'd probably just update that script to do the user/swapping only for toolbx/distrobox containers tho
@bketelsen ^^^
something like
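(a rough sketch of the idea from the distrobox docs, with the label check and arg handling simplified, not the exact script:)
```sh
#!/bin/sh
# Pass everything through to podman, but when exec-ing into a container
# labeled manager=distrobox, enter as the calling user instead of root.
# Transparent for regular devcontainers.
if [ "$1" = "exec" ]; then
    for arg in "$@"; do
        manager="$(podman inspect --type container \
            --format '{{ index .Config.Labels "manager" }}' "$arg" 2>/dev/null)"
        if [ "$manager" = "distrobox" ]; then
            shift    # drop "exec"
            exec podman exec --user "$(id -un)" "$@"
        fi
    done
fi
exec podman "$@"
```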
yeah we could ship that on the image if we wanted to
this would be transparent for regular devcontainers (podman) and do an user swapping for distrobox containers
only problem with this is putting it in a path that the flatpak can read
basically what we did is made vscode do only docker, and kept podman for distroboxes
i'm using ~/.local/bin/podman-host
we have vscode on the image so reading podman-host on the image shouldn't be an issue
yea, just sharing some knowledge, maybe it's useful 🙂
indeed!
aaah ok easier then
maybe this is the play and then just put it behind a
ujust vscode-podman
and then we're done?
I'm using the flatpak, with this trick it instantly worked with distrobox
I have elected the dumber and longer route of engaging with MS on the flatpak
actually let me file this as an issue to capture it
ya it can be, only thing is that then the user needs to setup its path for docker to this script, in the settings
because the prospect of a wolfi terminal in vscode would be pretty awesome
yea I mean
I think, until devcontainers gain traction, a regular user would "open terminal -> pkg-manager install stuff" and then when they open vscode, they expect to find that stuff there
infuriating
but I get it
yea transitional phase
depends on how much opinionated one wants to be
it's one of those. "I refuse to bend the knee to the old guard" then it's also like "fine, if it means less people complaining then that's better."
hehe both very valid
especially the second lol
https://github.com/ublue-os/bluefin/issues/883
"ut core desktop will also come with pre-installed lxd containers you will be dropped into if you spawn a terminal, there you will be able to simply still use apt/dpkg (just containerized) to install anything from the existing archive ..."
from ogra in the omg comments about ubuntu core being delayed
almost as if we're all working on this stuff for the past 2 years or so lol
hey wait a minute, I can just add this script, it won't hurt sitting on disk
yea I mean 🤷♂️
podman-host
is what we're going with?
Yeah. This is like what we were aiming towards prior to going with default dev-containers.
podman-wrapper I think would make sense. Clearly its a wrapper and would be obvious what is needed to change from default
alright, let's do it: https://github.com/ublue-os/bluefin/pull/884
if anyone here can post a
podman inspect
of a toolbx container, we can easily add support for those too
I mean, devcontainer religion aside, bluefin-cli with apk AND brew? People will use that I think.
I guess yes 🤔
thanks!
@j0rge wait a bit before merging so I can also fix some corner cases in the script
drafting it, no rush!
also, not sure about using flatpak-spawn. We are on the host?
we are
also if the flatpak ever becomes official this is a nice thing to have in place already
@j0rge this works with both toolbx and distrobox, and works both inside and outside flatpak
niiiice
Still has the
NB: this has the same issue as prompt's default behaviour (not a real distrobox enter) but if you ask me, this is way less of a problem for an IDE than a proper terminal, where I expect I can launch graphical apps and such
But at least it's a proper IDE to work with, also in flatpak
https://github.com/ublue-os/bluefin/pull/885 did this as I cannot edit your PR
@bketelsen new kernel, a working terminal with vscode. Anything else your highness?
mwahahaah S-tier linux setup
I like this a lot.
We now have both dev containers and toolbox/distrobox working.
@89luca89 for Devpod due you use it with podman or docker? I've had weirdness using dev-containers (things like features were constantly breaking builds with podman)
docker
so the reason we ended up at docker
is every tool we added, like devpod
or like, every extension in the marketplace
just works ootb with docker but not with podman
and we didn't want to ship snowflake configs for every little thing when we could just ship with docker and then all the popups go away in vscode and all the other tools and transparently just work
@bketelsen can you try this but with
/usr/bin/podman-host
? https://github.com/ublue-os/bluefin/pull/885
I have a colleague I'll set up with this on Monday. Literally was walking him through the wrapper idea earlier this week. This is perfect timing
gah did I forget to +x the stupid thing
Lol
One downside of vscode.
There was this package in emacs that would automatically chmod +x a file if it started with a shebang
It actually might just be a built in configuration.
ugh I can't be bothered with git cli right now
this web ui has weakened me
I can't believe we got that feature in Prompt almost right after asking for it
keyboard shortcuts have been broken for months now they're going to be fixed
With colorful prompts.
the wrapper works perfectly
start the distrobox, then
attach to running container
then pick your folder
delightful, amazing, perfect
thanks @89luca89 ❤️
@j0rge **THE FINAL SHAPE IS ALMOST COMPLETE!**
can you snag some screenies for the docs?
Oh my gosh this is awesome
You all are really hitting it out of the park
your perfect wolfibox is almost complete.
I was meaning to get more done today but something came up with work so I got overtime in instead haha. Have plenty of time this weekend though
I use podman but as @j0rge said is not as smooth sailing as docker
But I personally value more the added security of rootless podman
trying podman-host. It failed on both distroboxes and toolboxes. Distroboxes it would give me a failed to connect to websocket and toolboxes it would try and install the .vscode-server to /root/ and fail due to lack of privileges.
Toolbox
and now distrobox worked this time after nuking ~/.vscode-server
you're running rootful?
nope.
distrobox is now working after nuking ~/.vscode-server from my home directory.
toolbox for whatever reason is trying to install into my container's /root/ instead of my user's $HOME
maybe that dir ownership was broken by a previous docker attempt? That happened to me two days ago
sounds like toolbox isn't mapping your user
I think it doesn't know where to install .vscode-server. Since it has my permissions and is unable to write to /root. I wonder if it thinks that the $HOME directory in the container is /root
another thing is that we don't get proper integration. So distrobox-host-exec isn't working
another toolbox just worked and I got dropped in as root. But it was using a quadlet to set that one up.
but it installed everything in /root
so nothing seemed wonky in $HOME
to get distrobox-host-exec working, just needed to export XDG_RUNTIME_DIR.
I remember seeing errors related to this but couldn't remember where
yeah I could fix with export XDG_RUNTIME_DIR=/run/user/1000. So probably could add that as an --env to the podman-host command
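something like this, I'd guess (container name is illustrative):
```sh
# Forward the host runtime dir so distrobox-host-exec can find the user bus.
podman exec -it --user "$(id -un)" \
  --env XDG_RUNTIME_DIR="/run/user/$(id -u)" \
  wolfidev sh -l
```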
Yes that's the problem with this (and prompt) approach of using directly podman instead of distrobox (or toolbx)
There are actually various things to setup
I'd like to just intercept the "terminal shell" and replace that with the proper distrobox or toolbx command but didn't manage
In addition of the user-swapping or it would set the vscode server as root which is not ok
I previously would set { remoteEnv } to get things going. But that was on a per distrobox basis
Yea wanted to avoid that
I mean we could do a simple
for i in $(env); do ...
and append the host env to the podman command
it's not super-ideal tho
Needs testing, but...
https://github.com/EyeCantCU/os/commit/0c5b69be0ad039bd7474788160c862a77eed57ee
Interesting conflict:
Another reason I should look into per file replacements...
Mmm...
we need to get the stuff from util-linux moved to linux-pam.
the prompt add-profile step can now set things as the default profile and set the colors.
need to work through a funny bug though
Yup. In spite of both being installed, I still run into that error. Messing with some things
did someone mention per-profile colors?!?!?!
dun dun dun
Yepp that's working.
Now working through making sure we don't add an already existing container to the active uuid array
Basically. If you are using a managed container. I set a custom dconf key. It will remove all of them on logout. But if one of them is set to default. It won't remove that one.
For things like opacity. It reads the value from the default while being created
Looks wonderful. I need to fix that Wolfi ASCII...
from kyle
you just need to colorize it
^^ there you go
Thank you guys!
The linux-pam package has the post-install for su and friends but the files are not in the package.
They are in util-linux, but linux-pam expects them to be with $source
Thank you! I'll fix that up
@EyeCantCU @M2 @89luca89 @bketelsen @Kyle Gospo I just realized. We're not just making a distrobox. When you add it together with Prompt we're basically competing with Warp. Like, I want the stuff they have so bad, but it'll never be OSS.
https://www.warp.dev/
and stuff like what you've assembled so far is already amazing, it can only get better from there. We can curate the best tools we can find for stuff like this and then there'd be no reason to run anything else.
And Kyle and I wanted those fig completions so bad we were all in the discord trying to get us more access to the linux port and then they cancelled all that, so just having inshellisense is so huge!
Our AI portion can be ChatGPT shell CLI 🤣
yeah, and use a charm form to paste in my openai key
we should do that, seriously
I think so too. All of us use ChatGPT
there was a cool oss one someone was working on
it was OSS
Wave Terminal
An open-source, cross-platform terminal for seamless workflows
check this one out
I'll take a look 🙂
haven't tried it in a while, trying it
This looks really cool
the fact that warp and wave have their own terminal lets them do all that nice tight integration. But we have the flexibility of running in any terminal.
in their settings
where we'd use charm.sh prompts instead. Which could look very dope
Let's definitely do this
Feels like we already are, I feel like we just didn't see it that way.
oh I'm ashamed I didn't file issues for charm.sh for wolfi, I'll file those tomorrow
I didn't realize we were missing the charm stuff. Definitely file one when you have the time 🙂
should we rebrand this?
oh, and then later down the line we offer it as a sysext and voila, fedora base + wolfi userspace. it can be like a fully integrated swap out, like do it right on the host, that'd be baller.
That one's up to you. Personally, I feel two ways about it. On one hand, it's been a core of Bluefin for a while now and on the other it really does standalone
I think I've fixed linux-pam
I've got working shadow
oooh!
Should have the prompt integration working better now.
It can set up palettes; by default it will set fedora, Ubuntu, and bluefin-cli if using quadlets.
It will clean up stale profiles.
And it won't add duplicates of the same profile.
Woohoo!!!!
Will get the pull request up in a bit.
Next up will rework the configure terminals script.
The simplified bluefin-cli one was updated as part of this.
Nod. I'm at kids soccer but will check when I get home unless someone gets to it first
PRs opened to Wolfi adding Shadow, fixing linux-pam, and fixing uutils
Nice!
in hindsight maybe we should have done
wolfi-base
and wolfi-sdk
instead of -dx
We can rename them if you'd like. Had to bring in the maximize build space action for DX :/
that seems odd
It's the goimports
but noticed that. Maybe we should just leave them but I think once we're done churning we should probably PR the base images here: https://github.com/toolbx-images/images
That's a good idea. We can definitely do that
I maintain the alpine images there and timothee has all the CI set up perfectly
but that can happen some other day, M2 just PRed the needed quadlet changes, so just need the wolfi merges + prompt rebuilt and we should be at MVP
Sounds solid. Building Wolfi right now to make sure I didn't just make the container as big as Bazzite
I'm joking 🙂
:clueless:
"oh no I put Steam in there"
Lmao
alright, what's the tldr on this vte thing for prompt?
@Kyle Gospo check it out
"The Final Shape is the name used to describe the ultimate goal of the Witness, and possibly the Darkness itself. It refers to a potential end-state of the universe in which the Witness intends to carve and calcify the universe into a frozen state of eternal perfection."
our perfect unix is nearly complete.
Cue orchestral song with singing monks
when I do this demo at this conference, there will be music.
Realized we had a 5.6gb image because I wasn't clearing out /tmp. Fixed that. Outcome will be interesting
heh
bluefin's rebuild is almost done with m2's fixes
Nice!
What if we dropped buildah for Wolfi/Bluefin CLI and used apko
687mb compressed, 2.2gb decompressed
I wonder if we could get smaller images
hell yeah let's do it!
Awesome! I'll get to work
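the flow is basically just this (config path and tag are illustrative):
```sh
# Build the image from the apko config, then load it into podman.
apko build ./toolboxes/wolfi-toolbox/wolfi-toolbox.yaml \
  ghcr.io/ublue-os/wolfi-toolbox:latest wolfi-toolbox.tar
podman load < wolfi-toolbox.tar
```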
that's looking pretty good
Not bad
@M2 everything appears to be working except the fedora profile wasn't created, the other ones were though
Hmmm. I'll check that
all of these work!
Is the fedora-toolbox enabled?
yeah the file is on disk
Okay
oh, inactive (dead)
aha! restarted the service unit and it appeared right where it was supposed to be!
So baller!
If the target is enabled. It won't delete the profile. So you can modify them.
oh yeah this feels good!
So if you want a different palette or make it the default
Just to confirm - having shadow also fixes toolbox integration? Otherwise I can see the Wolfi toolbox naming being kind of off
Actually... I'll try real quick
Seems okay
You all see this before?:
you are doing something with brew there
do you have the add the brew stuff into path using brew?
They figure out what shell you are using by using ps
https://github.com/charmbracelet/mods I was thinking of something like this for an AI assistant
Jumped into Bluefin CLI and saw it
Looking 🙂
Okay. I'll try a fresh box shortly
there's also this, I was thinking this for
tldr
: https://github.com/dbrgn/tealdeer
8 tldr client implementations = lol
though we should perhaps put both the github copilot cli and mods through the test.
Just want to take a second and appreciate the simplicity:
This is awesome
I can't wait to see if it's smaller!
I can't either. Also exciting to use the tooling we use for this
My life is yaml and terraform
I basically have been watching all the warp terminal videos for the past 6 months.
I don't think we'll be able to compete with the $6m they received in seed, $17m in series A ... and $50 MILLION they got in Series B. Lol.
for a terminal.
Fresh pull and couldn't reproduce. Are you using ps anywhere in your startup file for your shell?
I've seen that before when using brew's stuff to add their stuff to your path and environment variables
WTF
That's... That's a lot of donuts
No, it's very odd
I might have screwed the container up testing things earlier
I just thought it was peculiar given it only happens on entry
I just realized, the best part of this is there's not going to be any lag between when the Wolfi images and Bluefin images are built. So Bluefin CLI will always be using the latest Wolfi config
No waiting for Wolfi changes to be merged before they roll over
I had one instance where bluefin definitely built before Wolfi when making a change to both.
Yup. That's because it currently builds from the image on GHCR. All it's doing is layering on top of it. Image builds will be slightly longer, but now we can include the image config directly
We trade one benefit for another
wait, that's awesome!
Will need to get a handle on build sizes.
Somehow had 100 GB of stale images pile up
I have my castrojo/bluefin-cli cronned to go 1h after upstream bluefin-cli, are you saying I can make that more dynamic?
Think of it as having a staged build in a Containerfile
or does that only work inside the same repo?
Yep, you can. You can include the config on github and build at the same time
O_O
omg we should 100% do this for boxkit also
But again, keep in mind you're not building from the image. You're inheriting its config so it's building from the ground up
Definitely
This would make a lot of sense for boxkit
right but it's wolfi, that's still a fraction of the time vs. ubuntu/fedora/whatever.
Exactly
I do
ujust clean-system
regularly
oh also handy ... topgrade --only containers
./toolboxes/wolfi-toolbox/${{ matrix.image_name }}.yaml
what's the yaml file look like?
doing that
topgrade --only containers
is nice.
topgrade -v
is nice too
I love topgrade. Made using a Mac for work painless as far as maintenance goes. One command and done
Can't imagine manually updating all that stuff
the -v shows you what it supports but isn't on your system
I think they literally support everything
I might add git updates to my local config to keep all my repos up to date
That's what I want. Git repos
Okay so, all the configuration we do in the Containerfile will be done via a base package built with melange. The local repository will get passed to apko when the image is built. So like, Bluefin CLI will have a bluefin-base package that contains modifications typically done in the cf
This is how it's done for Wolfi base and the SDK. I think this is really cool and the way to go 🙂
can't wait to see it!
also dan merged in all your uutils fixes, kicked off a build
Haha! NICE
AND WE HAVE SWITCHED TO UUTILS!
Still need to incorporate melange for the base packages but Wolfi builds and makes it to GitHub in 14 SECONDS!
Also, we're getting aarch64 now
WHAT
https://github.com/ublue-os/toolboxes/compare/main...apko-build
ok
this is awesome
this is so the way
https://github.com/chainguard-dev/yam (a sweet little formatter for YAML)
For sure! And it's very straightforward. I'll be diving more into it tomorrow. But so far I'm very happy with this model
this is so great!
I just can't believe how fast the build is. Adding packages will increase that but it's just crazy
at some point where we want to keep track of apk's state like we do for brew, we should totally do it this way locally right?
like when rebuilding the image + my extra apks.
3m44s (buildah) vs 16s (apko) for wolfi-sdk. Insane!
oh dope, .sbom's in the registry along with the build!
Hmm... That would work easy enough for packages but require rebuilding the image locally. Which could be hard with the source image config and package configs being remote unless we package them on the image. Whatever package configs we add have to be built and typically generating an APK takes a minute. I'm not sure how the speed would compare but it's totally possible
ah, ok I'll put thought in using this for boxkit then since that will be really useful for others
ok so kyle put the latest prompt in bluefin, if you're up to date
prompt --tab-with-profile=bluefin-cli
or any of them just launches a host terminal. So I think it's still busted? I can't get it to work with or without --new-window
@j0rge use the uuid
not the name
OOHHHH
that's why m2 set those specific UUIDS!!!!
<--- has full clarity
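so for posterity it's roughly this (the UUID below is made up, and I'm assuming profiles live under the /org/gnome/Prompt dconf path mentioned earlier):
```sh
dconf list /org/gnome/Prompt/Profiles/        # find your real profile UUIDs
prompt --new-window --tab-with-profile=8ca3efd6bcb1
```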
@Kyle Gospo say it
bad news too
that video is the previous build 😂
so this has been working
at least we're up to date now
hmm still nope
unless my uuids don't match what was PRed
if you want to meet I can walk through it w/ you real quick
yeah gimme 3m to bio
in chat
ok it looks like --new-window doesn't work
and my fedora-toolbox seems to only want to launch ubuntu?
https://github.com/ublue-os/bluefin/pull/903 alright got the PR in, I'll keep testing new-window and see if I'm just messing it up
it's "Return" not "Enter", heh, so bluefin is missing, PRed the fix.
Just a heads up. Aarch64 builds are gonna be sloooow
At least for the SDK
This process is just making me want Wolfi copr
Yeah and merge queue continues to be funky at times.
ok so I'm having a hard time understanding how the other boxes end up making ubuntu containers and not the specified images in the quadlet
like if I select fedora or bluefin from the prompt dropdown it makes a new ubuntu container instead
(which is the default fallback when it doesn't know what to make)
I have enabled all 3 of the service units on 2 machines
That shouldn't be happening.
I'll add a check to make sure the container exists in the custom command.
But I haven't seen that behavior for when the quadlets are enabled and the containers are running.
I've only seen it when the container doesn't exist yet because the service isn't running
But the profile should only be there if the container got created
the host light theme doesn't look right either, I'll find a more suitable one and PR it in
Host has no theme at all right now.
yeah
It didn't make a host profile either
this will be a ton easier to debug when we have an iso lol
I would just test on a clean machine
Yeah. Let's do this via a skel or something
also, worst case. throwing this out there
we could just ship bluefin-cli automated and then just leave the rest to the user
I am also confused by just now
we have
ujust setup-distrobox-app
, distrobox-assemble
, and distrobox-new
Distrobox-assemble is based off of assemble, distrobox-new is used to handle boxes that have things outside of the assemble file. One of the arch ones we do different things if the DE is gnome vs kde (Bazzite).
I think setup-distrobox-app is a Bazzite one for exporting apps from a distrobox for some of their just recipes
Oh and there is toolbox version for some of these as well
😦
Yeah. We have a ton of recipes.
At least we've consolidated to more of a menu thing for distrobox vs having a bunch of individual just recipes.
nod
that can come after lol
Also we still need to move the fedora and arch boxes under the toolbox repo.
fedora is there, the one that isn't is some other one - I think we forgot why it exists lol
It's technically our upstream. It just has distrobox as the name instead of toolbox
oh we can probably consolidate?
Yepp. Pull their containerfile in.
Then make sure the GitHub action is corrected.
And figure out a way to keep both names for pushing to ghcr
Since that box is quite established.
https://github.com/ublue-os/fedora-distrobox we archived it already
I think we did this already?
We didn't....
https://github.com/ublue-os/toolboxes/blob/main/toolboxes/fedora-toolbox/Containerfile.fedora
Okay that will need to be done asap
GitHub
toolboxes/toolboxes/fedora-toolbox/Containerfile.fedora at main · u...
Centralized repository of containers designed for Toolbox/Distrobox - ublue-os/toolboxes
OHHHH
I know how to fix that, sec
We also need to fix our GitHub action for this guy as well. Since we were doing signing checks.
I'll work on that
We need the Containerfile from the archived repo as well plus our small adds.
oh god I'm not doing that, whoever uses Fedora can do that. 😄
I don't even know how this box ended up this way it doesn't look anything like the others
did we inherit this from someone maybe?
Maybe? I'll work it tomorrow now knowing that we archived the other one.
I'm working on some sliders right now instead. 🙂
I believe these are called steamed hams in your neck of the woods
Sounds tasty
I'll be in voice after I finish with the others if you wanna chill
noel made some pretty exciting breakthroughs ISO wise
Okay. I'll hunt down what's going on with Ubuntu and fedora quadlet for a little bit
Tomorrow I'll fix up the fedora containerfile
Fedora quadlet seems fine. Ubuntu is having an issue with entry point.
Okay.
So here's an oddity. Apparently ~/.config/containers/systemd doesn't override the ones in /etc/containers/systemd/users
Well. Correction. If they have the same name they sorta do. But the original named one gets started but then a new one gets made.
lemme ask kyle what's up with this box
Zsh merged
Okay. Odd but for whatever reason the Ubuntu box took forever to spin up under the quadlet.
oh boy, this distrobox is important for bazzite. unarchiving, kicking off a build. And then we can consolidate later.
profiles are working
!!!!!
how lol
ubuntu for whatever reason took like 3 minutes to come up (still not sure why) and it was failing the first few times due to some sort of exec error
This is looking amazing
oh thats sick
PAM config will be merged very soon 🤓
And IT'S IN
Halfway through a very long build and very conflicted on whether or not I want this space heater running all night
PRs opened to toolboxes for Zsh and shadow. I am so happy we finally have these
very cool to build a distrobox with apko
It is! Though I am starting to see a few cracks I'm going to discuss with tooling
what's your workflow for testing these locally? https://github.com/ublue-os/toolboxes/commit/30ab0350344e725230d8ef229ab6a1724d3ef55c
I have to read through the wolfi-dev/os Makefile to see what make package/* actually does :keknervous:
The action or building packages?
There's not really an easy way to test an action locally. I usually work on a different branch and test it there
For packages, it really depends on what I'm building. Checkout https://edu.chainguard.dev
Specifically
https://edu.chainguard.dev/open-source/melange/
yeah I meant the packages. I've contributed a couple packages to wolfi but haven't built anything outside of the os repo so far
oh nevermind, it's literally
melange build file.yaml 😄
@EyeCantCU shadow and sudo-rs still conflict over su.
I wasn't aware that was a problem :/
I'll look into it
I think that would require per file replacements
Probably. I'm guessing sudo-rs has an su implementation.
Yeah, that makes things rough because we don't have that yet
We can remove sudo for a little bit and use fake sudo
Which is just a wrapper around su-exec
That would work for the meantime
Nice!
We lose sudo flags for now. Need to make sure things still work fine enough.
I can see what I can do about fixing this later today
Awesome. But will play around with stuff now that shadow has landed.
Also need to make sure all of the stuff works for zsh
If we could get oh-my-zsh ootb, that would be killer
I'm looking at:
fzf-tab
zsh-syntax-highlighting
zsh-autosuggestions
Any others?
That looks good to me. There's one more I use. One sec
Ope it's Zsh-syntax-highlighting which you've already got on there
Starship can also be enabled as an omz plugin
Something is still failing for using toolbox to initialize wolfi
Initializes and then immediately exits
Right now the only thing I can see is that toolbox thinks the container is a fedora image
Wolfi responds with an incorrect signal to conmon when being set up.
I really wish the log was more descriptive
I'll have to trace it later
To see what's going on run with toolbox -vvv enter $wolfi-toolbox.
I'm going to make sure the other stuff is working cleanly. I think issues were from other quadlets named the same thing right now
Debugging something that takes half a day to build :/
what's it building?
102mb for wolfi-base, yet to see the outcome for wolfi-sdk
Building the SDK takes a few minutes
It's funny watching aarch64 crawl and x86_64 zoom on the GitHub runners when I have the opposite experience all day long every day
I've got a better idea. With the SDK specifically, let's just retrieve the build artifact and resign it with the melange key we generate
@j0rge @M2 solved the distrobox-enter integration with vscode, now it properly executes all the "podman exec" as "distrobox enter" thus having the right environment, variables, and integration, with display/GUI and host-spawn/dbus
need to polish out the script, but looking good now I can almost try vscode again 🤔
luca do you sleep?
ahahah
no.
I have lots of questions but I'll start with a softball. What color scheme is that? It kinda looks like gruvbox
and this is using flatpak too
gruvbox-material, both vscode, terminal and vim
Amazing. Absolutely amazing.
can anyone here run some tests with this?
This will NOT support toolbx, primarily because it's more involved than the podman hack before and I don't have toolbx installed to debug 🤷♂️
Yeah. I'll give it a try. I think not trying to support toolbox is fine.
and preferred!
Cool if also your tests are successful I'll open a PR tomorrow 👍
Anyone here familiar with retrieving info from GitHubs API endpoints? Need the run ID of the latest successful run of a workflow
This works with a hardcoded run ID: https://github.com/ublue-os/toolboxes/actions/runs/7892728990/job/21539854217#step:5:1
Our wolfi-sdk is now only 1.0gb. A lot better than 2.2
Official is like ~650mb so it makes plenty of sense with the package additions we make
Oh gosh...
curl -L https://api.github.com/repos/wolfi-dev/tools/actions/runs
ooo this is better. Now I just need to parse with jq: curl -L https://api.github.com/repos/wolfi-dev/tools/actions/workflows/release.yaml/runs
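which boils down to something like this (filtering server-side so jq just takes the first element):
```sh
# Latest successful run of the release.yaml workflow, id only.
curl -sL "https://api.github.com/repos/wolfi-dev/tools/actions/workflows/release.yaml/runs?status=success&per_page=1" \
  | jq -r '.workflow_runs[0].id'
```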
GOT IT 🙂
It works with ubuntu and fedora with shared and own home. Our wolfi box doesn't work due to a check of /tmp/.X11-unix. It tries and checks existence of /tmp/.X11-unix/X3 which doesn't exist, making it think it needs to create /tmp/.X11-unix, which it doesn't have the permissions to
/tmp/.X11-unix/X keeps incrementing by 1 until it fails.
This looks like a bug with dev-containers and not distrobox. Neither distrobox-enter or distrobox-init check these
dumb workaround, but if you chown /tmp/.X11-unix it works for this box.
okay. Now something very odd. It will only connect to one of the boxes. (Shared home). When selecting a different container it doesn't actually connect to a different container
It's also not a fan of quadlets (which was working for the first connection)
And now extensions in the remote connection seem to fail. After a few minutes it will say remote extension host has died. Then failed to restart
keybindings working
Guys, I'm sorry and happy (you'll see why) to say we are not getting a run directive in apko. Everything we do will have to be through packaging or via contributing to tooling. Got linked to this (for the second time). It's excellent. Highly suggest you all read it: https://www.chainguard.dev/unchained/images-as-code-the-pursuit-of-declarative-image-builds
Wrong place :/
Wait no
Right place
ok that inspired a thought. I read the article and it linked to go-apk: https://github.com/chainguard-dev/go-apk - which lets you build out a memory or folder-backed tree of an apk installation. What if we could marry the distrobox/sysext threads here together? Use go-apk to install all the bluefin-cli tools (and whatever else would be useful) and package it up as a sysext? since sysext doesn't carry
/etc
, you couldn't/wouldn't worry about updates, just download a new build of the sysext. I don't know if it's possible, but it's an interesting idea to explore.
Brilliant. I have an idea on how we'd do that. Produce the artifact during the build process and retrieve it that way.
I think we're at a point here where this has long been deserving of a channel but at least the thread is indexed
just had dinner with like 5 chainguard folks
I did the container demo and then I was like "we want to do this as a sysext"
and we couldn't think of a reason for that not to be a wonderful idea
container will handle today, the systext makes it future proof
and I feel like if we take the OCI we have now but also spit out a raw file, that's a way smaller thing to bite off than "let's do all of dx"
but like we just scope it to "grab bluefin-cli OCI, export the tarball into a raw file, and then apply it onto the host OS"
@M2 sooooo... Found out why sudo rs conflicts with shadow. Login utils weren't being copied properly to shadow-login so they were in the base shadow package :/. Submitted a PR
Mmmh this is surely because vscode blindly puts the server in ~/.vscode-server, so shared home means all use the same server, indeed being then stuck at the first one
Will experiment if we can hijack the path in this wrapper script and put it in a non-shared path
Mmmh this is with flatpak or native?
And seems like VSCode doesn't support this shit... https://github.com/microsoft/vscode-remote-release/issues/6079 https://github.com/microsoft/vscode-remote-release/issues/472 the feature request has been open for almost 5 years lol
and seems the vscode-server injection is done before calling the wrapper, so we cannot hijack it :/
Wow that's unfortunate
This was native.
Overall, custom home worked out fine except for extension manager dying.
Shared home 100% borked now and I think I need to reboot/wipe tmp.
@M2 @j0rge now it symlinks each
.vscode-server
to a unique path for the container, so that in shared-home situations it will still use a per-container server, plus, it works also for custom-home containers
Still it will be transparent for regular non-distrobox containers (also now it's pure POSIX sh, instead of bash)
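(the trick boils down to something like this; the variable name is illustrative:)
```sh
# Give each container its own server dir even when $HOME is shared.
box_id="${CONTAINER_ID:?}"   # assumed to be known at setup time
mkdir -p "$HOME/.vscode-server.d/$box_id"
ln -sfn "$HOME/.vscode-server.d/$box_id" "$HOME/.vscode-server"
```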
will give this a try!
Still don't know about the X11 socket thing, it shouldn't affect normal usage (it's just growing 0-size files) but seems to be vscode's bug
yepp. It's vscode and only seems to affect some containers. For mainstream distro's it didn't trigger
@89luca89 This seems to work with Distroboxes with caveats.
They don't work well with quadlets (still using distrobox enter) due to extension manager breaking. For shared $HOME they fail to load due to some issue with stdin. Custom $HOME worked okay.
Using normal distroboxes, there is still some oddities. The extension manager is in a kinda odd state. Running things through the command palette fails, while things done directly in the extension work. An example is that I was unable to create a new file with <C-S-p> new-file. But making a new file by pressing the icon in the explorer worked.
Thanks will check what is going on 👍 about the quadlets, I suspect it's because there might be some differences between a quadlet-generated container and a proper distrobox container
extension manager is working fine for me (as do ctrl-shift-p commands) are there any examples of stuff I can try?
For the quadlets we have a list of them here:
https://github.com/ublue-os/toolboxes
with an example of the fedora one here: https://github.com/ublue-os/toolboxes/blob/main/quadlets/fedora-toolbox/fedora-distrobox-quadlet.container
@M2 the quadlets do not have the right label for distrobox, they need
manager=distrobox
to be recognized correctly from the podman-host script
ah no they have it sorry my bad
need some rest probably 👀
But I noticed, at least the fedora-toolbox you linked is missing --pid host
and --volume /run/user/1000:/run/user/1000:rslave
I assume it would be Volume=%t:%t:rslave
and --pid host
just using a PodmanArgs
yeah.
Volume=%t:%t:rslave
is --volume /run/user/$UID:/run/user/$UID:rslave
I'll make sure those are on all of them along with PodmanArgs=--pid host
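i.e. roughly these two additions in each quadlet (path here is illustrative, assuming a user quadlet):
```sh
# Append the missing keys to the [Container] section, then reload.
cat >> ~/.config/containers/systemd/fedora-toolbox.container <<'EOF'
Volume=%t:%t:rslave
PodmanArgs=--pid host
EOF
systemctl --user daemon-reload
```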
not sure if that is the problem with the podman-host integration, but probably not having
/run/user/$UID
is one of the key integrations of distrobox so that might be influential
that would be a mistake if i'm missing those two. I was trying to mimic what I saw in podman inspect for a distrobox
Yes its a bit tricky but you can just do
distrobox create -d
and that will print the podman command to execute
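e.g. (name and image are made up):
```sh
# Print the podman command distrobox would run, without running it.
distrobox create --dry-run --name fedora-toolbox \
  --image registry.fedoraproject.org/fedora-toolbox:39
```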
And from that you recreate the quadlet
i'll do that next to make sure the quadlet run command is similar.
is there a difference between using a podman run vs podman create/exec
Run is create+exec so for run you need to have flags from both
okay that's why I was using podman inspect
the good news for the quadlets, is we are using distrobox enter for getting into the box
👍 yep it's good, on the other hand,
prompt
is still not doing it, and I really need to refresh my C skills tho :/
I built a go app to realize an alpine filesystem with this method. Notes: It worked really well. Less than 40 lines of code to create a full alpine tree with
curl
and vim
installed. I don't think it will work for a sysext though because the resulting /usr
directory contains too many things that would break the host when overlaid. I'm looking to see if there's a DESTDIR type option so we could install packages to /opt
or something.
You could mount an overlayfs on top of the alpine rootfs and then save just the overlay
There is also this https://github.com/flatcar/sysext-bakery
Speaking with Thilo from flatcar, they have a very nice SDK (gentoo based) that lets you create sysexts using subdir targets, for example /usr/local/myapp/{bin,lib..}, thus making it possible to have sysexts with bundled libs and glibc that won't affect the host
Thilo is my coworker!
I was experimenting creating a gnome sysexts but I'm still familiarizing with stuff
Wonderful guy!
the flatcar sysext bakery was my inspiration for all of these experiments
Then you already know everything 😂
nope! I didn't know bout the SDK for subdir targets
https://github.com/flatcar/scripts
It's the SDK container in here
It can be used with their script or with distrobox and lets you cross-compile stuff
By default it uses flatcar's repos, but you can override them with Gentoo's
Still exploring the thing tho, too much stuff at once 😂
there just isn't enough time to play with everything :/
s/gaming/programming/g
looks like noel and co have the installer handled, so we're so close!
hey Jorge I'm curious I've seen that the releases are already bootable isos where you can choose ublue/bluefin (dx, framework and so on) what is different in this upcoming installer? where can I read more about this?
the current one is some kickstart hack thing we did, it's very unreliable because anaconda's support for installing an OCI container was rudimentary
https://discord.com/channels/1072614816579063828/1192504002252914791
we did that a long time ago, recently lots of things have been getting fixed all over the place upstream so noel volunteered to look at all of it and see if we can generate a proper offline ISO.
ah got it! thanks will catch up with the thread 👍
that can hopefully install a signed image, right now we do an install and then ublue-updater will rebase you in the background to a signed image, which works but is still too scary.
thanks a lot for the overview! I'm installing bluefin on another laptop rn 🙂
and was curious because I kept hearing about the installer (I was supposing to do a rebase on top of plain SB) but then found the isos on GH and was confused
@M2 we can restore sudo rs
I'm going to have to check this out 🙂
If interested I've come up with this little script that will generate a vscode-remote url to open a project directly in a distrobox (either from host or flatpak)
vscode-distrobox name-of-distrobox /path/to/project
Oh hell yeah this is 😎! I wonder if we can add this to bluefin?
https://github.com/ublue-os/bluefin/pull/914
Cool thanks!
This still requires the podman-host script to be set in the configs
I think we ported that script as well
We have the older one right now
is the new one working well for you? It is for me, but I'm not that big of a vscode user tbh
It's a lot better than what we currently have. It allows us to have shared $HOME and have a proper integration with a distrobox.
I haven't tested with flatpak but I think my oddities were more due to quadlet/Wolfi since the wrapper was flawless with every ephemeral Ubuntu/fedora container I tried.
If someone puts up a pull request I can review it.
Is there an issue with putting local installed code first in the if - elif block?
sure it's probably better yes
cool will open a pr then 👍
https://github.com/ublue-os/bluefin/pull/915
@89luca89 neovim or helix user?
Halfway into porting Podman to Wolfi. Podmanception soon™
Will you be able to handle the subgid/subuid in the package as a post run script?
Hmm... That I am not sure of. That might have to be done via apko when the image is created
And that of course would have to be added
But it's something I'll investigate
Going to get image builds wrapped up this weekend as well
classic vim 😛
I've just been hooked on the packaging workflow
I'm ashamed to admit I stubbornly use nano and I really need to change that. I've been trying out Zed on Mac OS and really like it. Has CLI integration like VS Code too
I think you can do some testing with https://github.com/89luca89/podman-launcher to sort out dependencies without having to worry about podman's package in the meantime
Oookay this is the coolest thing I've seen this week
Competing with bext ofc 🙂
what is it? curiosity intensifies
Check this out 🙂
https://github.com/ublue-os/bext
aaah nice!
@bketelsen dropping hits like 🔥
Hoping zed comes to Linux.
dude that IDE is really nice. I think once it hits linux and their plugin system gets a bit more advanced it will be killer
their LSP docs were a bit lacking the last time i checked it out
they have plans for a flatpak too
I am watching it with a keen eye
that was all @tulip !!
i just merged it lol
It's super good. I was disappointed when I found it and didn't see a Linux version but that's awesome news
I bet when it's ready that it will shred.
Lapce has been around for a while on Linux
Tried it out for a bit myself
if anyone is interested, I'm experimenting with launching the linuxserver/steamos container with gamescope to run a full steamdeck session, seems to work 👍
I remember on bazzite (I think?) you reverted the use of steam from container to host, maybe this could help?
the slowness is mainly my PC being a potato, while OBS was screencasting
not everything works OOTB, maybe needs some missing dependencies (not sure), and it surely needs to be either rootful or running in docker (which is rootful)
This is sweet. Getting a Steam Deck session working in Bazzite Arch would be awesome
It may be useful to develop/debug apps for steamos maybe
For sure 🙂
https://github.com/wolfi-dev/os/issues/13254
forgot to file these before I left
I'm thinking we package what this is bringing in within each package, add them as build-time deps, and copy the files in each of those over to containers-common. Better than cloning several packages and tracking all the different versions in one package
mods
is in now, glow
not yet
I think I saw a PR last night. I'll see if I can't find it
Looks like it needs an advisory
https://github.com/wolfi-dev/os/pull/13312
oh neato, what does this mean?
It gives us a way to track package vulnerabilities. To add it you pretty much clone this
https://github.com/wolfi-dev/advisories
Along with the OS repo, install wolfictl, and in the advisories repo run
wolfictl adv create
I can look into glow specifically here in a minute. That replacement doesn't look right to me
that is so awesome
Yup. It's pretty amazing.
I'm hyped because melange just got an interactive mode for debugging packages. Figuring out what's going on with packages is so much easier now. Added a debug target to our Makefiles that uses it. Things are constantly improving
Here's to hoping inshellisense gets merged soon. 😄
I tried on a clean VM install and the setup is really, really awesome.
but I ran out of time so still wanna test some things
I haven't used it much since it was released. When I tried it, it has builtin completion support for bash and what not. Does it support all custom completions loaded?
If it's in path on the container tab complete name and tab complete options will work.
If it's going to run from the host and doesn't have the name in path it will tab complete once the name is typed
@89luca89 Updated the Wolfi PR for distrobox :). Let me know if anything needs to be adjusted
@Kyle Gospo I love how boxbuddy has boxes in a menu
I wonder if we can add a preferred default there, or at a minimum it would be nice to have a "ublue" section where we can put our optimized boxes
Perhaps. Boxbuddy also merged that Ptyxis PR but it's not the default
So we need to configure that
ack, I have PRs in bluefin to switch to it at your convenience, and then I'll just snag when you set it as default.
this is way better for normal people
not only merged I see it in the flatpak release. NICE.
thanks a lot! Added a comment 👍
Thank you! Will adjust that here in a bit. Going to touch grass
Should be ready to go 🙂
thanks, the CI is failing now for wolfi,
sudo-rs
is a conflicting package
Also I think it's not at feature parity with sudo
in the config department (like allowing nopass) so I think we should just drop this package
I forgot - sudo-rs provides its own su binary in conflict with util-linux-login. Nothing to be done upstream because it's intentional that it be a stand-in.
su
functions with either package. Will drop in the morning 🙂
Except maybe add a provides
Got it, I think it's not a problem to drop sudo-rs, it won't do what I need sudo to do anyway 🤷♂️
Fake sudo via su-exec root is probably good enough.
That's what we have in our build right now.
Yea it covers all the stuff needed
Updated the PR but forgot the one change. Will adjust shortly
At the Laundromat ATM
This bug has just been fixed in ptyxis https://gitlab.gnome.org/GNOME/ptyxis/-/issues/31
Oh my gosh, that's awesome!
I remember there were some workarounds in place to start stuff at login for this bug, I think you can remove those when they release the fixed version 👍
oh dang this is awesome!
hey this makes our life easier with the preshipped profiles right?
like is this what causes distrobox to nope out and just create a blank new box because the existing one isn't started yet?
sweet!
I didn't have this problem by using boxes created by distrobox itself
Need to test quadlet but anyway we will need to wait for the fix to be released 👍
yeah also as a matter of coincidence we're getting fixes to the first launch wizard thing, so we can totally put "Set up my boxes" as checkboxes in yafti and then have that call the right just things and then it'll all be nice and set up when the user finishes installing.
I'd love to have the left bar there populated with our boxes/service units ready to go.
but I've been in ISO land the past few weeks
If they are marked as distroboxes and "distrobox list" shows them, I think boxbuddy should show them
Our boxes don't exist until the target is enabled.
COPR of ptyxis done building, kicking off a bluefin build
mmh then I think it's something else, distrobox can't know this in advance (and by consequence ptyxis and boxbuddy)
playing around with systemd services with distrobox enter. Looks like not a lot is actually needed for a oneshot service to work.
Only needs HOME and USER set along with which container at a minimum. For custom HOME you need to also set DISTROBOX_HOST_HOME.
Rootless podman can be setup in a user's
~/.config/systemd/user
. Rootful can be done by placing in /etc/systemd/system/
May have to see how to turn this into a little script
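A minimal sketch of that oneshot idea (unit name, box, and command are all illustrative):
```sh
# Rootless variant: drop the unit in the user's systemd dir and run it.
mkdir -p ~/.config/systemd/user
cat > ~/.config/systemd/user/box-task.service <<'EOF'
[Unit]
Description=Run one command inside a distrobox

[Service]
Type=oneshot
Environment=HOME=%h
Environment=USER=%u
ExecStart=/usr/bin/distrobox enter wolfidev -- echo hello

[Install]
WantedBy=default.target
EOF
systemctl --user daemon-reload
systemctl --user start box-task.service
```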