Your first question is context-sensitive. For example, many OSS projects have multi-hour builds, especially when extensive E2E testing is required.
As for GitHub Actions' built-in runners, they are free (within reasonable usage limits) and work well for OSS projects. However, they can become expensive very quickly for private repositories. One of my recent adventures involved migrating GA runners to AWS, which reduced GitHub bills by a solid $20k per month, at a company that wasn't nearly as big as that number might suggest. Even with Amazon's outrageous pricing, the migration made the CI runs and ephemeral environments significantly cheaper.
A few things to keep in mind about GitHub Actions' standard runners though:
1. They will be slower than your own server.
2. There's no support for Windows on ARM yet, and macOS x86-64 requires using an outdated runner image.
3. Expect occasional unexplained outages. GitHub hasn’t been particularly reliable lately, and when issues arise on their end, there’s often little you can do but wait.
4. Be cautious when archiving artifacts; they add up quickly (there's a quick way to keep an eye on it below).
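If it helps, here's a rough sketch of checking artifact storage via GitHub's REST API. The owner/repo names are placeholders, it only looks at the first page of results, and it assumes a token is sitting in GITHUB_TOKEN:

```python
# Rough sketch: list Actions artifacts and their sizes for one repo.
# OWNER/REPO are placeholders; needs the `requests` package and a token
# in the GITHUB_TOKEN environment variable.
import os
import requests

OWNER, REPO = "your-org", "your-repo"  # hypothetical repo
url = f"https://api.github.com/repos/{OWNER}/{REPO}/actions/artifacts"
headers = {
    "Accept": "application/vnd.github+json",
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
}

resp = requests.get(url, headers=headers, params={"per_page": 100})
resp.raise_for_status()
artifacts = resp.json().get("artifacts", [])

total_bytes = sum(a["size_in_bytes"] for a in artifacts)
for a in artifacts:
    print(f"{a['name']}: {a['size_in_bytes'] / 1e6:.1f} MB, expires {a['expires_at']}")
print(f"Total (first page only): {total_bytes / 1e9:.2f} GB")
```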
Hopefully this is helpful.
Yeah thanks for the info! I have no intentions of using GitHub for anything personal but worth getting an idea.
My E2E testing can take upwards of 10 minutes now and runs as a separate step (which is normal). I was just considering the minimal build time from source to running application binaries.
I've been trying to gauge this usage because we were considering offering CI as an option for nostr:npub1s3ht77dq4zqnya8vjun5jp3p44pr794ru36d0ltxu65chljw8xjqd975wz but I was cautious because I know just building 2-3 of my projects at one time with testing can keep a cluster node busy enough that it slows down the pipeline. With literally one client: me. If we extended that to ~30 projects I'd need to scale to something like a full 48U rack of compute, or come up with some kind of crafty scheduling scheme. Not sure if Jenkins allows for better hardware scheduling.
It would probably become pretty unaffordable to offer CI as a service on cloud compute.
Makes sense for sure. I think GitHub Actions should be fine for GitCitadel’s own development needs. Plenty of massive OSS projects using it. GitHub’s fair usage policies are pretty reasonable. I’ve hit some GitHub API usage limits in the past, but that was on me for pushing it too far and not properly optimising some heavier workflows.
As for building CI as a service for others on top of GitHub’s infrastructure… that’s a bit trickier. The limits are generous, but not that generous. Plus, I’m not sure if it would violate their usage terms. It might be worth reading up on or even reaching out to them.
https://docs.github.com/en/actions/administering-github-actions/usage-limits-billing-and-administration
https://docs.github.com/en/site-policy/github-terms/github-terms-of-service
https://docs.github.com/en/site-policy/github-terms/github-terms-for-additional-products-and-features#a-actions-usage
From the last link above:
Actions should not be used for:
* The provision of a stand-alone or integrated application or service offering the Actions product or service, or any elements of the Actions product or service, for commercial purposes;
* Any activity that places a burden on our servers, where that burden is disproportionate to the benefits provided to users (for example, don't use Actions as a content delivery network or as part of a serverless application, but a low benefit Action could be ok if it’s also low burden); or
* If using GitHub-hosted runners, any other activity unrelated to the production, testing, deployment, or publication of the software project associated with the repository where GitHub Actions are used
Jenkins' specialty is job scheduling, parallelization, and dispersal.
Well, I guess I haven't used it enough then. I had a really hard time getting it configured the last time I set it up; I had run a few jobs on it but was running into agent issues, switched to OneDev in my search, and here we are. OneDev doesn't have any systems to manage hardware queuing. I can get very good parallelization and caching out of it, but it just wasn't made for scheduling and queuing. This might be solvable with scripts, but we just shouldn't need it. No team really should.
I also remember Jenkins being built for teams, not SaaS use, so we'd need an abstraction layer, assuming it has a decent API.
Couldn't you just assign people agents, or something, and track that way? Have a limit, per agent, and then throttle after that? Just make it part of a premium offering.
I just don't think OneDev is good for the job in that case. Jenkins can at least be run outside of the repository as an extension service.
Kind of, I suppose. It would be over my head at the moment. You can assign executors to a job, and agents to an executor, but throttling doesn't really solve a scheduling issue.
It's a backward stop-gap imo, not worth using if starting from scratch. Real hardware scheduling would be better, so user Y's job gets queued after user X's and we make more efficient use of the agent service.
Stacking multiple agents on the same machine just consumes more of resources that are already expensive. The goal would be to use the same resources and schedule the use of them better (rough sketch of what I mean below).
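To make that concrete, here's a toy sketch of a single shared queue over a fixed agent pool. The agent count, job names, and durations are made up, and this isn't how OneDev or Jenkins actually schedule; it's just the shape of the idea:

```python
# Toy sketch of shared-pool scheduling: one global FIFO queue, a fixed
# number of agents, jobs tagged by user. Purely illustrative.
import queue
import threading
import time

AGENTS = 3  # pretend we have three build machines

jobs = queue.Queue()

def agent(agent_id: int) -> None:
    while True:
        user, job_name, duration = jobs.get()
        print(f"agent {agent_id}: running {user}/{job_name}")
        time.sleep(duration)  # stand-in for the actual build
        print(f"agent {agent_id}: finished {user}/{job_name}")
        jobs.task_done()

for i in range(AGENTS):
    threading.Thread(target=agent, args=(i,), daemon=True).start()

# User X submits first, so their jobs run first; user Y queues behind them,
# instead of each user holding a dedicated agent that sits idle otherwise.
for n in range(4):
    jobs.put(("user-x", f"build-{n}", 0.5))
for n in range(4):
    jobs.put(("user-y", f"build-{n}", 0.5))

jobs.join()
```

In practice you'd probably want per-user fairness or priorities rather than pure FIFO, but the point is that the pool is shared and stays busy instead of agents being stacked per user.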
I'm thinking that we should just try to launch fast and extremely minimal, with the Nostr login/messaging feature implemented on an HTTP git server and coordinated with the relay, and then create something more fancy and feature-rich later, and offer that for a premium rate.
It could even be two distinct git servers on two different machines, as the repos will be identifiable by their ability to write to the wss://thecitadel.nostr1.com relay, and the repo events the subscribers upload would all appear together on the GitWorkshop page, regardless of which services they rent from us. We could add a little 🛡️ emblem to premium subscribers, or something, to make them stand out from the crowd a bit.
But that all seems like something we could do, later.
Oh yeah, I'm actually hoping it's completely separate, and I have no intentions of implementing CI soon. No feature creep. Hoping for a very basic git remote compatible with git and ngit.
So basically I'm playing with Duplo blocks. I'm very okay with this. I always wanted easily repeatable builds that are fairly quick, and a stable build system that doesn't rely on any specific infrastructure.