So... I found this about setting up an RPM repository in Github pages: https://jon.sprig.gs/blog/post/2835
this may be a solution for us? I'm not sure if there is a max size for static content in a Github pages site.
@j0rge @bsherman @Kyle Gospo
I know I've seen something like this before, though maybe not this particular post.
But this is exactly one of the things I wanted to investigate.
I'm assuming we are using Github CI to build our existing RPMs?
probably in configs repo?
We build some in config some in akmods, some in ucore-kmods
And beyond that anything we build is a COPR. But I’m also curious about building a few more things that are currently in negativo17 repos
I'm down to try it with Mesa, we need a va-drivers package that doesn't randomly break
Copr in my case
We build only a couple things in GitHub
Only thing is that opens us up to h264 licensing issues
So maybe it's better to just deal w/ RPM fusion....
I have not imagined replacing rpmfusion
Only negativo17, and putting our ublue nvidia, just, addons, config and kmod builds into some kind of repo instead of being layers
I bring it up because right now hardware video decode is broken on main
Because rpmfusion hasn't built Mesa yet
Gotcha.
Might be fixed as of today since Fedora updated mesa
But those sorts of hiccups keep happening
Yeah, I would say test this concept iteratively
I can work on it. And try pushing just our config/addons/just type rpms. If that’s good, add the kmods, and if that’s all working well, start on negativo17.
Finally if all that is solid for us we can think about rpmfusion, but that’s a lot to bite off and definitely not plausible until the model is proven.
nod, and didn't we start a packages repo as a start to this?
that's spec files for COPRs
yeah I was hoping we'd have one nice centralized thing
i would like that too...
if no one is in a rush ... i'm happy to work on this...
❤️
i'm eager to finish some workflow changes on ucore, then this is my next goal
like, if anything, just having the RPMs building in our infra so we can see them all going along with the images would be awesome
What a glorious merge queue it would be
i can't find a suitably glorious emoji for that comment
I might be able to try and help on this too. Good opportunity to shoulder surf to learn GitHub workflows?
sure
happy to work with you on it
@bsherman I created and invited you to this: https://github.com/orgs/ublue-os-repo-test
I'm gonna start poking at this for a little bit.
https://github.com/ublue-os-repo-test/ublue-os-repo-test.github.io
OK. Have had a few thoughts since looking over the github workflows that this blog uses: https://jon.sprig.gs/blog/post/2835
He has 2 different workflows to make his setup happen.
He has 1 workflow which actually builds the RPM and deb packages and makes them available in Github Releases.
https://github.com/terminate-notice/terminate-notice/blob/main/.github/workflows/release.yml
He has a 2nd workflow which builds the actual repository here: https://github.com/terminate-notice/terminate-notice.github.io/blob/main/.github/workflows/repo.yml
TLDR: He is building the packages using his source code through Github Action, making them available as a release, and then ingesting them into a repo that has a front end page available with Jekyll.
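For reference, the ingest side of that pattern boils down to a handful of commands. This is only a sketch: the org/package names, directory layout, and key handling are all assumptions, and `run` echoes commands by default so nothing external is actually invoked:

```shell
#!/usr/bin/env bash
# Sketch of the "repo" workflow's job: pull released RPMs, rebuild the
# yum/dnf metadata, and sign it. Org/package names and the directory
# layout are illustrative assumptions, not the real ublue setup.
set -euo pipefail

# With DRY_RUN=1 (the default here) commands are echoed, not executed,
# so the sketch can be read/run without gh/createrepo_c installed.
run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi
}

REPO_DIR="repo/fedora/39/x86_64"   # hypothetical repo layout

# 1. grab the RPMs the build workflow attached to a GitHub Release
run gh release download --repo example-org/example-pkg --pattern '*.rpm' --dir "$REPO_DIR"

# 2. (re)generate repodata/ so dnf/yum clients can consume the tree
run createrepo_c --update "$REPO_DIR"

# 3. detach-sign the metadata index so clients can verify the repo
run gpg --detach-sign --armor "$REPO_DIR/repodata/repomd.xml"
```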
So a couple questions...
We build some of our packages in Copr, we have some in RPMFusion, and some from Negativo.
What amount of effort would it take to retool what we are doing in Copr and make it available in our Github repo? OR, another thought I just had: what if we just grab the RPMs directly from the sources we use and mirror them?
because realistically all I see the repo action doing is grabbing the RPM from the github releases page, ingesting it, and signing it.
In the second scenario, as long as we can grab the RPMs we need directly, we could just mirror them? Not really sure how much space you can store in a single Github repo.
And I know we don't want to rebuild all of RPMFusion 🙂
Also this is a note for myself on how I built my hello-world rpm spec file. Figured we could use it as a test: https://earthly.dev/blog/creating-and-hosting-your-own-rpm-packages-and-yum-repo/
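In case it helps, a throwaway hello-world package along those lines can be built like this. The spec below is my own minimal sketch, not the one from the article, and all names/versions/paths are illustrative:

```shell
#!/usr/bin/env bash
# Minimal hello-world RPM build sketch for smoke-testing the repo.
set -euo pipefail

topdir="$(mktemp -d)"
mkdir -p "$topdir"/{SPECS,SOURCES,BUILD,RPMS,SRPMS}

cat > "$topdir/SPECS/hello-world.spec" <<'EOF'
Name:           hello-world
Version:        1
Release:        1%{?dist}
Summary:        Trivial package for repo pipeline testing
License:        MIT
BuildArch:      noarch

%description
Prints "hello world"; exists only to smoke-test the repo pipeline.

%install
mkdir -p %{buildroot}/usr/bin
printf '#!/bin/sh\necho "hello world"\n' > %{buildroot}/usr/bin/hello-world
chmod 0755 %{buildroot}/usr/bin/hello-world

%files
/usr/bin/hello-world
EOF

# Produces RPMS/noarch/hello-world-1-1.noarch.rpm when rpmbuild exists;
# guarded so the sketch still runs on machines without rpm tooling.
if command -v rpmbuild >/dev/null 2>&1; then
  rpmbuild --define "_topdir $topdir" -ba "$topdir/SPECS/hello-world.spec"
fi
```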
@Kyle Gospo I see, so anything we are doing in Copr has our sources in Github anyway: https://github.com/KyleGospo/prompt
so we could technically release it in Github as an RPM in the releases section.
I need to stew on this for a bit 😛
My thoughts have been not to follow jon.sprig.gs workflows, but rather take inspiration from the final state and maybe find some detail which is helpful in the actual repo generation process and how that fits into a github action
i don't actually think we need the jekyll front end, but it could be useful
For sure. That was my thought too. I was just looking at a working example.
lets scope this for a bit, i've tried to say it a few times... but restating here for future readers...
priorities of our goal of testing out a repo:
1) any RPMs we build in github can start getting published to our own ublue repo (akmods, config, etc)
2) any RPMs we source from other github releases could be pulled in and published in our own repo
3) any RPMs (and deps) we source from negativo17 could start getting published into our own repo
after the above is working well, and only then... I think we can consider more ambitious goals like maybe:
4) converting from COPR to a github only build and publish system for all our COPR stuff in our own repo
5) considering a custom build of anything we have been sourcing from rpmfusion and putting that in our own repo
and maybe 1 and 2 get swapped for the purposes of easiest to test? i'm indifferent...
Scoping is good. Thank you for pulling things back to earth 😄 I was thinking a bit more holistically.
we could certainly be putting RPMs we build into releases, but i think that gets messy very quickly because releases are tied to git tags... and we build multiple different packages in a single repo, daily, and they don't get versioned together...
instead, we need to build N things, each potentially getting rebuilt on a different schedule relative to its own upstream and other deps. For example, a package may get rebuilt if it changes version, or, if it's a kmod, when our builder's kernel version changes.
my general thought is:
A) determine the plan for hosting the repo
B) determine the pattern for publishing to the repo
C) build a pattern that can predict "oh, i have this rpm spec/source/upstream and i would build/publish it, but that package-version-combo already exists in repo, so no need to do rebuild/republish"
so, C may be some reusable workflow we end up running many times a day with different inputs, but, most of those runs will short-circuit to a successful (or skipped) build since we'll already have the artifact in the repo
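That short-circuit could be as simple as comparing the candidate NVR against what the repo already serves. A sketch, with the published list faked as a text file (a real workflow would query the actual repo metadata, e.g. via `dnf repoquery`):

```shell
#!/usr/bin/env bash
# Sketch of the "already published, skip rebuild" check from (C).
set -euo pipefail

# already_published NVR LIST_FILE -> succeeds if NVR is in the list.
# Assumption: LIST_FILE stands in for real repo metadata here.
already_published() {
  grep -qxF "$1" "$2"
}

published="$(mktemp)"
printf '%s\n' 'ublue-os-just-1.2-1.fc39' 'hello-world-1-1.fc39' > "$published"

for nvr in 'hello-world-1-1.fc39' 'hello-world-2-1.fc39'; do
  if already_published "$nvr" "$published"; then
    echo "skip: $nvr already in repo"
  else
    echo "build: $nvr"
  fi
done
# prints:
#   skip: hello-world-1-1.fc39 already in repo
#   build: hello-world-2-1.fc39
```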
i'm not sure if I've put all that down in an orderly, (hopefully) easy to understand way before... i've probably just babbled about it
all in all, i have this pretty clear in my head 🙂 i just need to do it LOL
my concern after asking around is getting rate limited by github, so just asking, this would be CI only right? not expecting to be consumed by end users?
that is a legit concern
ghcr.io is not a problem, but github is not in the rpm repo business
like the CDN used by this is not the same as ghcr.io, packages, etc.
we've definitely said we don't want this to be an end-user repo, and i don't think we'd include a config for it in an end user image (eg, silverblue-main or bluefin), but it would be hard to stop users who figure it out
but I think staying inside github's network should be fine
ack
this kinda falls back to the idea of us building the repo itself on a VM somewhere... i'm kinda curious, if we set up a free cloudflare account and tossed the repo behind that, how it would do 😄
the repo machine is basically a dumb http server, but needs to have its contents indexed/updated whenever we publish artifacts
Yeah, I know this guy uses GitHub pages as his CDN, but it might be a smaller project with not a ton of users. https://jon.sprig.gs/blog/post/2835
right, i'm not sure if github pages is different from "github raw"
i'm inclined to try the github raw/pages approach, but also mirror it and be ready should we need to move to an external VM solution
Realistically, these would only be used for stuff baked into images.
that's the intent
but my biggest expected caveat there is... custom image builders, based on ublue main/nvidia/asus/surface images would expect to have access to our kmods at the very least, since we talk about them and advertise them... so if they move from container layers to repo... well... that's a thing. how do they get them for their image builds?
and I, as a user, am in that bucket 😄 I build my own images from
FROM *-main
and add kmods from akmods
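For context, that kind of custom image build is roughly this shape (the tag and kmod package name are illustrative assumptions, not my actual build; the point is that such a build would need the repo reachable at build time if kmods stop shipping as container layers):

```shell
#!/usr/bin/env bash
# Writes a sketch Containerfile of the custom-image pattern described
# above; names/tags are assumptions.
set -euo pipefail

dir="$(mktemp -d)"
cat > "$dir/Containerfile" <<'EOF'
FROM ghcr.io/ublue-os/silverblue-main:latest

# Today kmods arrive via a container layer (COPY --from=akmods ...);
# if they move to an RPM repo, builds like this would instead enable
# that repo and install the packages at build time.
RUN rpm-ostree install example-kmod-package && \
    ostree container commit
EOF
```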
I think for me the bigger smoke test is getting an RPM repo built and then ingesting RPM files into it.
And seeing if we can make it private somehow.
private how? private only to github actions? or private only to our org?
if i want to build an image at home, even as a developer testing this... i wouldn't be able to do so anymore because the repo would be private
I am starting to believe an actual private repo is a total non-starter.
That actually is a good point.
Because yeah, if you build it at home or in your own GitHub Org, you wouldn't have access to the package repos.
the question is really about what the usage will be on a "publicly available" repo which is not publicized for general usage, but those who build images can still use.
I would think quite a bit even by just us because we are doing builds almost every day right?
yes, but even if we doubled our build count, that's still not a lot of traffic.
I'm replying to myself. I think this is the way.
Lets build the repo in github... but if it grows too much, we mirror it, and only allow remote access, or something...
Sounds good to me. I already created a separate org for testing so we don't create an additional repo in Ublue org until we are ready.
i'm an "owner" 😉 thank you
You should feel honored 😉
So the theory works!
I've done a VERY basic Hello World with this.
no signing gpg keys or anything yet.
https://github.com/ublue-os-repo-test/ublue-os-repo-test.github.io
This repo should allow you to install a simple hello-world binary that is packaged into an RPM.
yum install hello-world
after adding the repo above.
cool
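For anyone following along, the repo file would look something like this (the baseurl is my guess at the github.io layout, and gpgcheck is off only because nothing is signed yet):

```
# /etc/yum.repos.d/ublue-os-repo-test.repo (hypothetical)
[ublue-os-repo-test]
name=ublue-os repo test (hello-world)
baseurl=https://ublue-os-repo-test.github.io/
enabled=1
gpgcheck=0
```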
i've been thinking about this too, so this a great starting point
@bsherman and I had a working session to discuss options. At this point, we've ruled out trying to use Github for this initiative. (for the RPM Repo portion of it)
@bsherman you'll be happy to know bling is now gone from bazzite and soon Bluefin
did we rule out OBS?
obs?
Like if we're not using github for the repo then the purpose for having the packages is to just remove things like the negativo dep and to centralize the spec files?
i have looked into OBS in the past... it's like COPR...
I don't think we'll be able to build akmods there
https://build.opensuse.org/
ack
alright so we're keeping copr then?
the main goal of the repo is not to avoid negativo/copr/rpmfusion
it's to have a repo to host our kmods (plus our other home-grown RPMs)
a secondary goal would be to have our own trusted mirror, with a known mirror sync schedule... and we could mirror rpmfusion/negativo (and maybe our own copr?)
ack
@dogphilosopher and I are thinking a good path forward would be for me to POC this scoped only to the ucore-kmods... if that works, we can propose it to team for a vote, or just cowboy it... and start moving the ublue-os/akmods into it as well
i will do another pass on OBS docs to see if it could work for us... who knows 🙂
I like moving akmods out of our build process to remove the temporary layer (which will make our images smaller) and it will simplify the code.
I can take a look at this too.
I'm just wondering if we use OBS if that just moves the problem somewhere else 😛
apparently you can host your own: https://openbuildservice.org/download/
it could potentially solve the problem of only having kmod RPMs available in image layers...
it would not provide a trusted mirror
i'd rather have dead simple yum repo and use github actions than host that, though
yeah.
From their docs:
> You can discuss the Open Build Service with us using the [email protected] mailing list. But bear in mind that the Open Build Service is a community support project. So please, if you plan to build packages, make sure
> - your package really adds something new to the community
> - you talk to people that are working on similar packages or topics to rather help on existing packages than duplicating packages
> - you let people know about what you're doing to find other interested community members. Mailinglists are the right place to do that.
> Always remember, regardless how much build power is added to the build service, it can be eaten up by not so usefull packages 😉

I'm not sure they'd welcome our kmods 😉 even if they can be built
yeah....
a year ago or so... there was a branch/PR of openrazer kmod git repo... specifically to enable akmod builds
but it got closed because OBS didn't want to work with akmod
i was following that closely at the time because I wanted that on my silverblue system
now, Kyle builds openrazer in our COPR and we build the kmod (using akmod) in github
that's the experience which turned me off on OBS
fair enough. It's good it was brought up as an option though. I didn't even think of it.
I have good relationships with people at suse if it's a matter of asking for resources
but if it's a license thing they're going to nack anyway ...
yeah, that might be a non-starter.
either way. Gonna go eat some dinner. Good talking with you @bsherman, lemme know if you need some help with stuff!
meh, i'm reading Terms of Service...
IANAL