I hope access to Code Review in GitHub will be available to paid users soon! It's going to be a game changer as a first level of code review before real people get involved with the PR.
We're both offering managed GitHub Actions runners - some of the differences include:
- Depot runners are hosted in AWS us-east-1, which has implications for network speed, cache speed, access to internet services, etc. (BuildJet is hosted in Europe - maybe Hetzner?)
- Also thanks to AWS: each runner has a dedicated public IP address, so you're not sharing any third-party rate limits (e.g. Docker Hub) with other users
- We have an option to deploy the runners in your own AWS account or VPC-peer with your VPC
- We're integrating Actions runners with the acceleration tech we've built for container builds, starting with distributed caching
Does anyone know of any books or resources on building a back-end application that streams video, end to end, in a performant way like Google Meet or Jitsi?
I find it exciting to have a tool that allows up to 100 participants at once right in the browser.
I don't think there is comprehensive information about that. We all do pretty much the same thing: implement the Selective Forwarding Unit (SFU) model.
The idea is that the SFU receives several streams from each user (usually up to three, at low, mid, and high resolutions) and forwards one of them to every other participant, depending on things like available bandwidth, requested video size, etc.
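As a toy sketch of that per-viewer decision (the kbps thresholds here are invented for illustration; a real SFU works from live bandwidth estimates and the viewer's requested video size, and re-evaluates continuously):

```shell
# Toy version of the SFU's per-viewer choice: given an estimated downlink in
# kbps, pick which simulcast layer (low/mid/high) to forward to that viewer.
# Thresholds are made-up numbers for illustration only.
pick_layer() {
  if [ "$1" -ge 2500 ]; then echo high
  elif [ "$1" -ge 800 ]; then echo mid
  else echo low
  fi
}

pick_layer 4000   # -> high
pick_layer 1200   # -> mid
pick_layer 300    # -> low
```

The key property is that each sender uploads a fixed small number of streams regardless of room size, and the SFU absorbs the fan-out.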
There are a number of Open Source implementations available, in case you want to study it deeper.
Jitsi was (is?) based on WebRTC. The usual challenge with conferencing is that beyond 2 participants a full mesh scales badly: each of n clients uploads its stream to every other one, so you end up with n*(n-1) streams. One solution is to centralize the streams.
That said, there are some images I wish I'd never seen.
If I could be sure it was only being used for good (by my definition of the word), that would make me eager to install a magical perfect opaque filter algorithm.
But it won't be perfect, and it won't only be used for what I consider to be good.
It took a community drumbeat and persistence from an enterprise customer to get the status message to even show a problem last month. [1]
It does suck, and I do think there must be some political infighting going on, given how many disruptions the service is having.
There's no excuse for something this important not only to have so much unplanned downtime, but also to devote no resources to connecting with the community through post-mortems or other reasonable interaction.
That said, I’m still all in on GA. It’s amazing and the coupling with repos is great. It continues to be subtly refined. So I just hope whoever is holding this product back gets out of the way.
All I know is that it doesn't seem like a wise choice to be locked into GitHub features or even use their tools with these frequent downtime episodes.
If a large open-source organisation were to rely on, say, GitHub Actions, you'd probably see more and more "GitHub is down" posts while they're unable to push that critical patch or run their cloud CI, and some may be considering solutions like this [0].
Every time this happens, you're completely locked in, and you end up contacting / complaining to the CEO of GitHub on Twitter for support.
Good deal for us, actually. I used to get upset about this, but the alternative really does suck more.
You can either deal with the occasional non-productivity of a SaaS offering (which for GH has never lasted more than about half a work day), or you can spin up your own stack on-prem and generate ten additional full-time problems in the quest to solve this one periodic issue.
The trick is to never put yourself in a position where your tools absolutely must work immediately or you lose a customer. Why promise to deliver a piece of software before it's already in hand? Also, if GitHub goes down and I really want to get an issue comment in, I can just open a text editor and keep a note in my local repo until everything is back up. I can even do some crazy things, like share my local branches with other developers over side channels until things get back to normal in the centralized system.
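That side-channel idea is directly supported by git: `git bundle` packs a branch into a single file you can move over scp, a USB stick, or chat. A minimal self-contained sketch (the repo and names here are made up for illustration):

```shell
# Everything below is local; no server is involved at any point.
tmp=$(mktemp -d) && cd "$tmp"

# One developer's repo, with some work done while GitHub is down.
git init -q alice && cd alice
git -c user.name=alice -c user.email=alice@example.com \
    commit -q --allow-empty -m "offline work"
git branch -M main

# Pack the branch into a single file that can travel over any side channel
# (scp, a USB stick, a chat attachment, ...).
git bundle create ../work.bundle HEAD main
cd ..

# The other developer clones (or fetches) straight from the file.
git clone -q work.bundle bob
git -C bob log --oneline -1
```

Once the central host is back, both sides push as usual; nothing about the history cares where the objects traveled in the meantime.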
Wasn't this kinda the entire point of talking developers into moving to the git model? It would be fun to rewind the clock and use these takes as an argument for sticking with TFS et al.
I agree with you. To me it just sounds like people being entitled, expecting a service to never go down, ever. Shit happens and things go down. Design your processes around that fact and have procedures lined up for when it does happen.
However, saying "just get GitLab and deploy it to your own server" glosses over the huge time sink it is, especially for small companies that are already short-staffed, to maintain something like that. I sure as heck do not want to be responsible for keeping my GitLab server up.
As you say, if you're writing an issue, put it in an issues.md file in your editor or something. If you're working on code, even better: just commit locally.
Hosting Gitlab hasn’t been a huge time sink for me and I’m a one-person show who is conscious about my time. I set up automatic updates and haven’t SSH’d into that machine in about a year.
I agree that in many scenarios you will find that your approach is perfectly valid.
For us, we have ~8 people that need to use the system all at the same time. We utilize issues very heavily (we are entering 5 figures), with lots of data-heavy QA content throughout (screenshots/videos/binaries/etc). Additionally, our customer environments are actually configured to talk directly to our GitHub repository for purposes of rebuilding themselves from source at update time.
Because of the number of participants involved in our particular usage of GitHub, we find a hosted solution with horizontal scalability and resilience to be an excellent fit. We have made the decision to make it Microsoft's problem to figure out how to eventually deal with 10k+ issues and 200+ employees/clients hitting the same host all at the same time.
If we had decided to host our own GitHub/GitLab server in our cloud environment, we would have to constantly review the capacity of the IT systems. As we add employees and customers, the load we put on our source-control solution will increase linearly. Additionally, because of the deploy-time approach, having a solution backed by someone else's network means we don't have to worry about our private network being slammed by outside requests. Our total checkout is nearing a gigabyte, so you can see how this might scale poorly if we operated out of our own infrastructure.
I almost feel like we are abusive of Microsoft's generosity considering the sheer amount of content we have throughout our organization's account. Every day I wonder when I am going to get some email demanding that we switch to a more expensive enterprise plan because of how we use the service. Maybe that day will never come. Even if it does, I will gladly shell out for the bigger contract.
I suppose it is different for you because you are all in private repositories, but... have you seen the GitHub repos for very popular open-source projects? Angular (the framework) alone has nearly 20k issues, the CLI repo another 10k, React another 10k, and golang 41k (35k closed, 6k open!).
I have to imagine that you're not exactly a small fish, but also not making them sweat too much either.
There's not much special about GitHub Actions. All the data needed to reproduce the actions without GitHub is there. I'd be surprised if someone hasn't already made a "run your GitHub Actions outside GitHub" system somewhere.
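Such tools do exist; nektos/act, for instance, replays workflow files locally in Docker. The underlying point, that the workflow YAML already contains everything needed, can be shown with a deliberately naive sketch (made-up ci.yml, no Docker, just extracting the `run:` steps and executing them; a real runner also handles `uses:` actions, the event payload, and isolation):

```shell
# A workflow file is just YAML; the shell commands live under `run:` keys.
cat > ci.yml <<'EOF'
jobs:
  build:
    steps:
      - run: echo building
      - run: echo testing
EOF

# Naively pull out the run steps and execute them, no GitHub involved.
sed -n 's/^ *- run: //p' ci.yml | sh
```

This prints `building` and then `testing`, the same commands a hosted runner would execute for those steps.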
If you're working on a serious project, hosting it mainly on GitHub, and you don't already have a backup solution in place, I'm afraid you're late. But better late than never! Make sure you can always deploy when less-reliable services are down, and GitHub has always been one of those. Git makes it incredibly easy, too, as long as your CI/CD is already externalized.
I think if revenue or product quality is tied to a VCS, having an active-active or active-passive setup is the way to go.
Fortunately, I'm on an on-prem product so that investment hasn't seemed worth it yet.
This doesn't mean we don't escrow our code, but rather than try to rebuild from source, I just take a short coffee break and wait for the impacted service to come back up :)
I'm consulting for a company that uses Azure DevOps, and I can't believe how much harder it is to get things done with than GitHub Enterprise. The documentation is also strictly worse, and localization is just not there at all.
-edit-
I assume someone who works on Azure DevOps might be reading, so here are a few small, specific things so you don't think I'm just a hater.
- It is hard to use markdown consistently, and it is particularly painful when doing any project management work on DevOps (which I assume is one of its strong points)
- The lack of a Japanese menu really sucks for something that is supposedly aimed at enterprises. Having to explain both native and English language vocabulary terms is double plus ungood (again, in an enterprise product put out by a company that has usually done excellent business in Japan)
- It's baffling that I can't make changes to the template when you give me a template option. I assume it is a permissions issue, but really?
I personally know multiple senior ex-githubbers who quit over the ICE contracts; it seems plausible to me that they’ve lost key expertise and can’t safely integrate with their older systems.
Based on these reports, most of their recent issues have come from hitting scale limits in their MySQL configurations and not having sufficient monitoring.
I recently left a project using AWS for one using Azure. I thought the AWS APIs were inconsistent and janky, but they look great compared to Azure's. Azure is also extremely slow to perform actions in my experience, and the documentation is very heavily tilted towards being a sales funnel. I do like the Key Vault service and the idea of resource groups. The whole tenant / subscription / roles / user mess of permissions, not so much, but I expected that from Microsoft.
The AWS API is just fragmented. Too many teams that probably didn't communicate very closely. That being said, you can sort of follow the logic - or the multiple logics.
But that's really the only negative.
I'm trying to find a single term to describe all of Azure, and I'm having difficulty with it. Sophomoric? Like a place where the leaders are a bunch of B-class players, who lead all sorts of C-, D- and all the way down to Z-class characters.
And hopefully you'll never need support with Azure, especially urgent support, because it's atrocious.
What specific issues are you having? What do you mean by "B-class players, who lead all sorts of C-, D- and all the way down to Z-class characters"? I assume in this ranking you are an A+ player yourself, which is very impressive.