A lot of the conclusions they're drawing in this post about the "agentic era" seem quite misguided and some don't really seem to make sense.
I have no doubt GitLab has too many employees and can benefit from being a more focused company, but it's tiring reading these layoff posts so chock full of buzzwords. I guess they're desperately hoping if they prognosticate about AI enough it will placate the investors.
Let these people keep betting their companies, futures and net competency on text autocomplete. The future is bright for me and everyone else that isn't falling for it.
Reminds me of when microwaves first came out. Investors decided to go all in on "vibe cooking" (lit. cooking with vibrations), complete with microwave ranges (no conventional oven), until the public wised up to the fact that there was in fact no cooking (Maillard reaction) involved in their vibe cooking. Took about 15-20 years, but microwaves finally took their rightful place as a utility appliance rather than what they were touted as (a centerpiece). Pick up a microwave cookbook from the 50s for some laughs.
I sure hope you're not mocking the classic "Microwave cooking for one" book!
The Maillard reaction is very possible in microwaves,
but it requires microwave-specific crockery. I think the vision was possibly killed by people not wanting to maintain a second set of crockery.
That book came out much later than what I am talking about, when many workarounds like turntables (and indeed, specialized crockery) were made available. This thing [0], for example, did not even have a turntable, and yet was created in an "all-in" form factor for the American home. It was in production for nine years.
Perhaps we can liken these auxiliary advances to agents and harnesses in the analogy. In the end, despite the unbridled optimism from certain backers, we never solved the fundamental issue with microwaves: that they use electromagnetic waves for cooking, and that electromagnetic waves have certain undesirable properties for this application.
They sure are great for reheating food though. The problem is that a lot of developers think they are Michelin chefs when in reality they are Olive Garden cooks reheating frozen meals.
But I think the argument holds that microwaves are basically for heating things up and for essentially steaming a lot of vegetables. (I'll do one ear of corn in the microwave with pepper and spices.) I do have a thick microwave cookbook from the 70s or 80s, but I've mostly only ever used it for vegetable cooking times, and probably less since I started roasting vegetables in the oven a lot of the time. I have cooked some of the other recipes, but not for a very long time.
I understand that a lot of people don't have much of a choice, but I use mine anyway (I actually got a 4-in-1 when I had to replace the old one after it burst into flames, and that's somewhat useful as a second oven).
It just made me realize why I don't have those fond memories of my mom's cooking. When we got our first microwave she went all in on the vibe cooking, and it took her years to realize how dumb it was.
I hope my kid doesn't get the same kind of memories about my weekend projects.
You are obviously right and I see examples of it everywhere.
E.g. I asked Claude Opus 4.7 (the latest/greatest) the other day “is a RimWorld year 60 days?”. The reply (paraphrased): “No, a RimWorld year is 4 seasons of 15 days each, which is 60 days total”.
Equally, it gets confused about what is a mod versus vanilla, since it is just predicting based on what it read on forums, which are clearly ambiguous enough (to a dumb text predictor).
And that is the reason why it is only autocomplete. You probably had less context than the previous poster, so it could not mix things up.
The previous poster either had more memory enabled or the search went through more topics. And by the way, it’s really hard to give it access to only some things.
Yeah no human has ever made a mistake, good odds here! Luckily hubris is only a trait found in fictional stories and there are no real world accounts of people making terrible decisions while others go along with their terrible decisions.
Can't wait until we're forced to bail out these companies or else China will take over the open internet (or whatever bullshit they use to justify the bailouts).
If you ignore all the complexity and discard every detail, it’s literally just a box. Yet curiously you aren’t living in a cardboard box, or an aluminum shed.
The point, which you know and are being willfully ignorant about, is that it’s more complex than that. And you’ve neatly discarded the detail that they’re multimodal.
I will freely admit though, analogy is useless when interacting with someone who has already made up their mind.
Well that’s the problem with arguing through analogy, when you’re wrong you just come out looking like a Fucking Dork.
Truly your analogy isn’t wrong, it’s just that you are wrong. A house is a box and it needs walls on each side and a roof like a box. But it’s good to remember that it’s supposed to be a box. That might sound overly reductive, but it’s important to keep the fundamentals in mind in case someone tries to sell you a bowl like it was a house. A bowl does not have a lid, and will not keep the elements out.
Similarly, people keep trying to sell token prediction machines as if they are capable of thought. They are not. They predict the next token, nothing more. Yet people keep trying to sell them as if they can think.
It’s overly reductive to the point of being wrong yes. You’re being intentionally obtuse. Then again your brain is just some electric meat so why would you be capable of more?
Good job calling it a token predictor instead of text autocomplete btw, you’re making progress, my wonderful analogies have already begun to enlighten you. You’ll achieve some nuance yet.
I'm pretty sure it was sold as a house. That you understand that you can think of it as a box doesn't make it not a house. That's the point of the analogy.
It's literally how they work. I think the magic that none of us really expected is that our languages, human and computer, are absurdly redundant. But I think it makes sense, in hindsight at least. When we say things it's usually not to add novel or unexpected information that comes out of nowhere, but to elaborate or illustrate a point that could often be summed up in 5 words. This response is a perfect sample of such.
A lot of people suspected, and for a very long time, that most programs were absurdly redundant. The real issue is that the languages do not really allow for producing code that can be easily shared. Some of the functional languages do, but mostly in ways that are completely irrelevant and useless in practice for such a goal.
Programmers writing their fiftieth mostly identical CRUD handler may not have noticed but a lot of other people did.
Wilful ignorance can't be fixed. As the saying goes, you can lead a horse to water but you can't make it drink. I can point you to ReAct loops and tool-calling and agent-based systems. If after being pointed to those you still choose to be stuck on the "it's just text prediction" line then that's a problem you are creating for yourself, and only you can get unstuck from a problem of your own making.
>> Wilful ignorance can't be fixed. As the saying goes, you can lead a horse to water but you can't make it drink. I can point you to ReAct loops and tool-calling and agent-based systems. If after being pointed to those you still choose to be stuck on the "it's just text prediction" line then that's a problem you are creating for yourself, and only you can get unstuck from a problem of your own making.
Woof, you're sounding mighty aggressive for someone with such a fundamental misunderstanding of the technology you are defending. Have you ever actually implemented a system around an LLM, or do you practice ~~voodoo~~ "prompt engineering"?
> I can point you to ReAct loops and tool-calling and agent-based systems.
Those are all implemented - quite literally - by parsing the *text* that the LLM *autocompletes* from the prompt.
Tool calling? The model emits JSON as it autocompletes the prompt, and the JSON is then parsed out and transformed into an HTTP call. The response is then appended to the ongoing prompt, and the LLM is called again to *autocomplete* more output.
"ReAct loops" and "agent based systems" are the same goddamn thing. You submit a prompt and parse the output. You can wrap it up in as many layers as you want but autocomplete with some additional parsing on the output is still fucking autocomplete.
If you're going to make such strong assertions, you should understand the technology underneath or you'll come off looking like an idiot.
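To make that concrete, here's roughly the whole trick as a minimal sketch (Python; the function names and the JSON shape are illustrative, not any particular framework's API):

    import json

    def agent_loop(prompt, llm_complete, tools, max_steps=10):
        # llm_complete: callable str -> str (the autocomplete step)
        # tools: dict mapping tool names to plain Python callables
        context = prompt
        for _ in range(max_steps):
            completion = llm_complete(context)  # the model only extends text
            try:
                # e.g. '{"tool": "http_get", "args": {"url": "..."}}'
                call = json.loads(completion)
            except ValueError:
                return completion  # no tool call: treat it as the final answer
            # The harness, not the model, actually runs the tool...
            result = tools[call["tool"]](**call["args"])
            # ...and pastes the result back in so the next completion sees it.
            context += completion + "\nTOOL RESULT: " + json.dumps(result) + "\n"
        return context

Every "agent framework" is some elaboration of that loop.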
> Tool calling? The model emits JSON as it autocompletes the prompt, and the JSON is then parsed out and transformed into an HTTP call.
No. Code assistants determine which tool they can execute to meet a specific goal. They pick the tool, then execute the tool (meaning: they build command-line arguments, run the command-line app, analyze output, assess outcome) as subtasks.
And they do it as part of ReAct loops. If the tool fails to run, code assistants can troubleshoot problems on the fly and adapt how to call the tool until they reach the goal.
> Code assistants determine which tool they can execute to meet a specific goal. They pick the tool, then execute the tool (meaning: they build command-line arguments, run the command-line app, analyze output, assess outcome) as subtasks.
And they do this - wait for it - by emitting tokens. Which are then parsed into a function call.
You’re just confusing a harness around an LLM for something more. At the core, the LLM takes input tokens and outputs the most likely next tokens. Those tokens might be interpreted into a tool call or anything else, but it’s still just token prediction.
If you disagree, explain what the actual difference is. I claim that LLMs “use” tools by emitting tokens which are taken and passed to a tool call. If you disagree, how?
> And they do it as part of ReAct loops. If the tool fails to run, code assistants can troubleshoot problems on the fly and adapt how to call the tool until they reach the goal.
Yeah, but fundamentally all of this is implemented as next token prediction, given the context (which the tool results are).
Honestly, it's pretty amazing how much we can do with next token prediction, but that's essentially all that's happening here.
Now this is literally something that occurs because it is text autocomplete, an inherent issue of token-based large language models. So you are literally right :D
My point is that AI can have its issues and it can have its plus points (just like text autocomplete, but some suggest it's on steroids).
The issue to me feels like we are hammering it into absolutely everything and anything; perhaps it should be used more selectively, y'know, like perhaps a tool?
Yes, AI should be used as a tool for very specific things. Once it's trained on everything, it's completely useless. Anyone who is trying to use it for everything will fail. I predict by 2030 (if not much sooner) the AI bubble will burst. The only good outcome will be all this hardware getting liquidated for pennies. Mark this prediction, it will happen ;-)
Calling the technology "text autocomplete" is not productive to the discussion. Less than a decade ago the idea that a computer could take a fuzzy human-readable description and turn it into executable code was science fiction, but now it's commonplace. As is the ability to write long-form text that is so hard to distinguish from the real thing that placing an em dash in your text will cause an uproar on this forum. You can describe things by their fundamental functions and make many things sound elementary, but I find it counterproductive given the capabilities we've seen from this technology.
> Calling the technology "text autocomplete" is not productive to the discussion.
If pointing out the flawed approach to making something more productive isn't productive, then what do you consider to be productive?
> Less than a decade ago the idea that a computer could take a fuzzy human-readable description and turn it into executable code was science fiction
COBOL was sold to people on the idea that anyone could create something with a fuzzy human-readable description that would result in executable code. That was back in the 60s.
What lessons did we learn?
1) Leaving things to the people who make fuzzy human readable descriptions turns out to be a terrible way to have things implemented.
2) Slowly and deliberately thinking things through before, during, and after implementation always leads to better results.
It's a lesson that keeps needing to be re-learned by people who don't/can't look at things through a historical lens.
It was the same with COBOL, as it was with programming in spreadsheets in the 80s, as it was with the no-code movement in the 00s, as it is now again with LLMs in the 20s, and as it will be again with a future generation in the 40s.
---
> As is the ability to write long-form text that is so hard to distinguish from the real thing that placing an em dash in your text will cause an uproar on this forum.
Long form text generation that is hard to distinguish from human authored text also goes back to the 60s.
That's when we got the first instances of the Eliza effect.
> You can describe things by their fundamental functions and make many things sound elementary, but I find it counterproductive given the capabilities we've seen from this technology
These people will never admit they actually don't understand technology and just attack others. It's a great strategy if you're trying to proselytize a business into a religious movement, and it seems to be working on those who induce psychosis in themselves.
Is "text autocomplete" supposed to be an insult? To text auto-complete a physicist I would have to understand physics as well as them. To text-autocomplete your words I would need to model your brain.
Text autocomplete is just a descriptor of the technology.
To do text auto-completion, all you need to know is the probabilities of words appearing after other words; no understanding of those words or the meanings behind them is necessary.
It's not attention that's the problem, it's how we train networks offline with backprop.
LLMs are the most successful form of neural network we have, and that's because they are token prediction machines. Token predictors are easy to train because we're surrounded by written text - there's data nicely structured for use as training data for token prediction everywhere, free for the taking (especially if you ignore copyright law and robots.txt and crawl the entire web).
We can't train an LLM to have a more complex internal thought loop because there's no way to synthesize or acquire that internal training data in a way where you could perform backprop training with it.
Even "train of thought" models are reducing complex thoughts to simple token space as they iterate, and that is required because backprop only works when you can compute the delta between <input state> and <desired output state>. It can't work for anything more complicated or recursive than that.
Sufficiently good text autocomplete is indistinguishable from intelligence to an impartial observer, and that's the only honest criterion for intelligence.
I'm a little shocked that people discussing this topic could be so far apart! I'm completely serious.
Have you ever thought about how you would determine if an arbitrary given entity is intelligent or not? I think you'll agree it would require some kind of test. You might agree that the test would have to involve bidirectional interaction (since otherwise it would be impossible to distinguish an actual person from a recording of one).
This retort doesn't make any sense. Take humanity back perhaps 40k years ago and language did not even yet exist. Our token base was 0. Put an LLM in that scenario and it will endlessly cycle on nothing and produce nothing, stuck in a snapshot in time. Put humans in that situation, and soon enough you get us.
This is like saying that somebody speaking Chinese is just playing out the Chinese Room [1] experiment. The only reason it's less immediately, obviously absurd here is because the black-box nature of LLMs obfuscates their relatively basic algorithmic functionality and lets people anthropomorphize them into being a brain.
If that is the argument though, current AI aren't just autocomplete - because we could reasonably show an AI an image or a video and have them call a tool rather than return text. That'd be comparable to a pre-language human.
I'm not seeing the comparison because what you're describing is not at all an internal or emergent process. Without a human there to hand-feed the AI access to create these tools, create the interface, and then hand-feed all of it to the LLM, none of this happens. This is like Kubrick's monolith, bumped up a few orders of magnitude.
> I'm not seeing the comparison because what you're describing is not at all an internal or emergent process.
And (1) that process isn't an autocomplete because it isn't reliably text->text. It could be visual->sound, sound-> action, visual->action or visual->visual or any combination. As well as proactively seeking out new stimulus. Modern LLMs aren't limited to text, they just happen to leverage a large model of the language as their most novel element.
(2) We're not talking about internal or emergent processes, we're talking about intelligence. No human is born intelligent. They go through a pretty similar process to LLMs where a lot of data gets dumped on them and they start responding to it.
Humans obviously need air to breathe. If you give us scuba lessons and extensive gear we can go underwater and keep breathing air, but the fundamental operation doesn't change. It's the same with LLMs. When you send an image to an LLM it's not parsed by the LLM itself but handed off to a separate encoder that converts it into tokens the LLM can consume, which are then fed into the same old autocomplete process.
And yes, humans are born intelligent. Have children and it's the most amazing and beautiful thing watching them begin to create out of nothing. For instance, with absolutely zero prompting or external guidance, children will begin to engage in intentional make-believe play, like starting to share their food with their favorite toy.
But this 'something from nothing' is also essentially required if you think about it. Try to put yourself back in primitive man's shoes some tens of thousands of years ago. You know basically nothing about the world around you, yet in the blink of an eye we've discovered the secrets of the atom, put a man on the Moon, created mathematics from nothing, created all the underlying technology and infrastructure required for us to have our little debate here online, and so much more.
> Take humanity back perhaps 40k years ago and language did not even yet exist.
This is not quite accurate. The human lips, throat, etc. have evolved to be better at producing speech, which indicates that it's not that recent, and that it was a factor in the success of groups who could do it better than others.
It likely started "no later than 150,000 to 200,000 years ago."
> It's literally text autocomplete. You can dress it up however you want but it takes input text and outputs the most likely next sequence.
Last year this level of ignorance and cluelessness was amusing. Nowadays it's just sad and disappointing. It's like looking at a computer and downplaying it as something that just flips switches on and off.
> Yeah they all want to fire the guys who can make sense of the mess the vibe coders are doing and try to stop it.
You're grossly inflating the level of contribution from your average software developer. Are we supposed to believe that the same people who generated the high volume of mess that plagues legacy systems are now somehow suddenly exemplary craftsmen?
Also, it takes a huge volume of wilful ignorance and self delusion to fool yourself into believing that today's vibecoders are anyone other than yesterday's software developers. The criticism you are directing towards vibecoding is actually a criticism of your average developer's output reflecting their skill and know-how once their coding output outpaces or even ignores any kind of feedback from competent and experienced engineers.
What I see is a need to shit on a tool to try to inflate your sense of self-worth.
I've seen which developers became vibecoders. They were the people I'd have wished to get rid of.
The ones who never acknowledge a mistake even if the process is crashing; the ones who put "return true" in a test so that the test doesn't execute and will insist that you broke their code if you remove the return true and when the test actually runs it fails; the ones who read a blog post about some new thing and decide we need to do like that; the ones who will write code that fails and then be nowhere to be seen when there is customer support to do.
> Gitlab is looking to lay off people like him. All major tech companies are currently racing to fire such employees.
Gitlab has been strapped for cash and desperately seeking a buyer to cash out for years.
If anything, the LLM revolution represents an opportunity that Gitlab is failing to capitalize upon. They have a privileged position to develop pick axes for this gold rush, but apparently they are choosing to dismiss themselves from the race altogether.
Gitlab's decision is being taken in spite of LLMs, not because of them. Enough of this tired meme.
Ahh, are we there yet? Has non-deterministic computer use eroded your mind so much that you are starting to question the binary system? You know, the insight that computers are something that flips switches on and off is rather old, and I have heard it uttered (although slightly humorously) several times already, nobody ever raising any eyebrow hearing it.
Not true, I tried just now. Took 30 seconds of due diligence. You could have done this too. Do better.
The problem is they’ll do what you ask. And if you are the type of non-curious person who replies “Autocomplete only 'knew' how to output a scraper...”, then you’ll tell it to make you a scraper instead of asking what your options are for getting HN data.
Sounds like you didn't even know what your own tool was doing. This would be a prime example of why relying on autocomplete based tools makes you look like a fool.
This thought that “maybe we are just next-token predictors too” is not particularly clever. Most of us have thought about that, but a bit of experience with LLMs makes it obvious that’s not what’s going on here. I think it’s a bit like listening to a recording of a person and swearing there’s an actual person in the recording device because the audible output is indistinguishable from the real thing. Why would you do that? You wouldn’t, unless you have no idea how a recording device works, in which case it seems like magic.
> a bit of experience with LLMs makes it obvious that’s not what’s going on here
I feel like that overstates the point quite a bit. There's a lot that's similar: neurotransmitter release is stochastic at the vesicle level, ion channels open and close probabilistically, post-synaptic responses have noise. A given neuron receiving identical input twice doesn't produce identical output. Neither brains nor LLMs have a central decider that forms intent and then implements it. In both, decisions emerge from network dynamics; they're a description of what the system did, not a separate cause (see Libet's experiments).
Now pretty clearly there's a lot that's different, and of course we don't understand brains enough to say just how similar they are to LLMs, but that's the point: it's an interesting thought experiment and shutting it down with a virtual eyeroll is sad.
A one-way audio channel is indeed too weak for a person to distinguish a person from a recording, but a bidirectional audio channel is easily strong enough: the person can verbally ask the person-or-recording a question and see if it is acknowledged.
I claim that a modern frontier LLM can be given simple instructions that make it impossible for a person to reliably distinguish it from a person over a bidirectional text-only medium.
If you seriously cannot tell what is the difference between a human being and a LLM and think they are both "autocompleters", you know very little about both humans and LLMs.
I cannot. Please explain the difference to me. Everything I've read about neuroscience says the brain is basically a signal prediction machine. The harness, signals, and incentive (hormone) systems may be different, but predicting the future is a basic building block of intelligence.
How much karma do you have on reddit, probably millions.
Only redditors with Dunning-Kruger would say something like this.
I recommend the podcast Machine Learning Street Talk; you'll find plenty of machine-learning PhDs who are also neuroscientists that are completely mystified by the brain, how it works, and what intelligence is. The people at the forefront of neuroscience definitely aren't reducing human intelligence and experience to "signal processing", or whatever you think it is.
The memo also says they're eliminating a lot of middle management tiers which has been a theme for a lot of companies recently. It's also been a theme historically. Really has nothing to do with AI. It's just the classic executive view that they are paying people who sit in meetings and write emails instead of writing code. Blissfully unaware that meetings and emails are how big organizations function.
> Blissfully unaware that meetings and emails are how big organizations function.
I don't know, I've seen more big organizations that have a dysfunctional amount of middle management and "meetings about meetings" than ones that truly benefit from that culture.
Meetings about meetings are frequently mandatory steps to navigate internal politics. Frame your questions, gather allies, hone your message. It's how you don't waste the time of very busy people that you need things from. Besides, the alternative is fewer managers dealing with 20+ reports and not knowing what anyone is doing. Institutional inertia can be frustrating, but it's also necessary to avoid chaos.
Your argument doesn't make sense. They literally explained why they are doing it. They are looking to remove those who can't or won't keep up with AI. That can be managers but also engineers. That's what most companies right now are doing.
Right but naturally that's not actually why they're doing it. In actuality, it's a layoff - they did not go through and analyze which employees are "keeping up" and which aren't, don't be so naive.
This, like virtually all layoffs, is for economic reasons. Of course you can't say that because that reflects poorly on your growth and makes your investors uneasy and yadda yadda yadda. But what do investors like? Hm? AI!
Oh! Oh!!! This is strategic, you see, so we can use even more AI, yes yes that's right mhm.
> they did not go through and analyze which employees are "keeping up" and which aren't, don't be so naive.
They do on the org level. That's not news for anyone who has worked at the upper-management level in corporations. Rule no. 1 is you keep your mouth shut about anything there. And of course it's for economic reasons... it's a business, not a charity providing lifelong employment for employees who aren't aligned to management goals. Management tells stories depending on who asks. The levels below execute them (by identifying those who aren't aligned).
>Machine-scale infrastructure. [...] Git itself wasn't designed for that load, and bolting AI onto platforms not built for agents is the biggest mistake of this era. [...] Git itself is being reengineered for machine scale.
Git itself is so far down the list of bottlenecks that do or could hamper LLM-driven development, even projecting years into the future...
Git has always been one of the biggest perf bottlenecks inside of the product.
First for any scaled deploy we recommended NFS. We were young and dumb and it was too slow. (We’ve all been there)
Then we went to an RPC model with gitaly and even unwrapped some of the git calls inside of that to speed it up.
Just a few months ago a large customer with thousands of devs and a large monorepo had their deployments grind to a halt because of a cloning-strategy change that introduced an accidental 10x in git calls. Git itself was the bottleneck because it’s not designed for this scale and speed.
For enterprises where thousands of developers are contributing code via git to a centralized system of record, and firing off thousands of CI jobs, Git is absolutely a bottleneck.
Now with LLM technologies we should easily expect a 5-20x code volume increase on the conservative side. Git is being stretched to its perf limits.
There’s a familiar saying: “Markets can remain irrational longer than you can remain solvent.” I think that applies here as well. Everyone (customers) wants AI; investors demand it. It may eventually calm down, but I’m sure many companies will be left behind and ultimately fade away if they don’t keep up until then.
I don't think it would be absurd for them to worsen. If LLMs cause discourse to worsen, but also grow and change, then the trainers are in a conundrum of ignoring new training data or losing track of the zeitgeist.
1. AI-free training sets no longer exist. This might degrade quality, although some claim that it will not.
2. Cost. Right now they are burning a lot of money to convince people it's good. But they might not be able to keep it up forever and need to increase prices (which few will want to pay) or degrade the quality to save money.
Non-synthetic pre-training text was exhausted a long time ago; the focus now is more on quality and RL/post-training.
Cost will keep going down and more powerful chips will become available, like they always have.
Reinforcement learning doesn't have a fixed ceiling, and advancements in software will keep happening as well.
Things like distilling smaller models will likely become effectively free, e.g. through speculative decoding, which speeds up larger models (an incentive to run it during inference) and in which the large model's logits are computed anyway, so they're free for the taking as distillation targets, etc.
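For anyone unfamiliar, the speculative-decoding idea goes roughly like this (a greedy sketch; real systems sample, and verify all k draft tokens in a single large-model forward pass, which is exactly why the large model's logits come out as a free by-product; the function names are mine):

    def speculative_decode(target_next, draft_next, tokens, k=4, rounds=16):
        # target_next(seq) -> next token under the big model (greedy)
        # draft_next(seq)  -> next token under the small, cheap model
        tokens = list(tokens)
        for _ in range(rounds):
            draft = []
            for _ in range(k):  # the cheap model proposes k tokens ahead
                draft.append(draft_next(tokens + draft))
            for i, tok in enumerate(draft):  # the big model verifies them
                verified = target_next(tokens + draft[:i])
                tokens.append(verified)
                if verified != tok:
                    break  # disagreement: discard the rest of the draft
        return tokens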
And people wonder why there is so much push back against AI. The last thing leadership should do when laying off people is use the term AI. It's the most tone deaf thing you can do.
We don't live in the same world as they do. Saying AI out loud makes line go up, not down. Investors are still eating this shit up, for now at least...
The most important thing is probably to use native UI controls, either AppKit or SwiftUI, rather than Qt or other frameworks, even those that espouse the use of native controls. That will get you a lot of things for free, generally.
Don't install invasive "helpers" or login items in weird locations; don't ask for huge amounts of system permissions without an appropriate introductory flow for your application, explaining what it needs to do before trying to install or get permission for something.
Bundle your dependencies appropriately; don't go fetch weird stuff post-install, the application should be designed so that everything is already there in the bundle.
Put application data in the idiomatic locations; ~/Library/Application Support, ~/Library/Preferences, etc.
Carefully consider what keyboard shortcuts your application uses; if you use the Escape key for something, or modifier keys, make sure it does something that makes sense on the platform and doesn't conflict with other shortcuts.
Don't bundle Chromium.
Generally: Respect the user as much as possible. Things should "just work." If there's a situation where it doesn't or something feels "odd", you should do whatever you reasonably can to fix that.
That first part is a no-go, as the top concern for me is to have a consistent UI across three platforms, so it won't look native, and that's a conscious choice, even with the tradeoffs.
It's a native cross-platform application, fully compiled, just one binary with a sidecar archive, so Chromium, post-install fetches and other similarly "weird" desktop experiences are already avoided :)
Thanks a lot for the extensive description of your expectations in general though, it's very helpful and I'll put it to good use!
A "reasonable" answer is probably a primary self-hosted Forgejo instance as the canonical forge, while using GitHub as a mirror solely to take advantage of its free CI, while that lasts, while hosting secrets with a dedicated secret-hosting provider (I don't know what the provider du jour for this is these days).
If the primary forge's only job is to host the actual Git infrastructure (the code, the MRs, the issues, maybe a wiki), it's a lot more simple than GitHub, and probably more within the scope of what people can reasonably administer themselves.
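If you skip Forgejo's built-in push mirrors, the mirroring half is a couple of commands anyway (a sketch; swap in your own URLs):

    # on whatever pushes to the canonical Forgejo remote:
    git remote add github git@github.com:example/project.git
    git push --mirror github   # mirrors all refs; run from a hook or cron

Forgejo (like Gitea) can also be configured per repository to push-mirror to GitHub automatically on an interval, which keeps the mirror current without any client-side setup.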
I hosted the first "java.apache.org". I was an early employee at CollabNet, and in the first discussions around starting subversion. I worked on Cloud Foundry.
This stuff isn't easy and I'm more than happy letting someone else do it at the expense of some downtime.
Will I have to patch machines, keep packages updated, deal with SSL certs, maintain action-runner infra, deal with billing for the machines, add monitoring, alerts, logging, etc.?
No, I don't want to be in the business of running my own Github clone. That's what I pay Github for.
Why do you pay salary to employees to buy food when you can just run a farm next to the office and save money by operating the farm and giving the employees food directly? You'd save money by not having to pay as high of salaries, and farms don't even need 24/7 devops teams.
Don't you think the farm example was a bit too extreme to make sense? A tech company probably does not have expertise in farming, but DevOps is something they already know how to do and can easily manage in-house. Also, how fast do you think farms produce food, that you could drip-feed it to employees constantly?
> solely to take advantage of its free CI, while that lasts
Eh, if you want to be able to continue working, deploying and whatnot as normal during weekdays, I'd suggest also moving to Forgejo Actions if you're moving anyway. Not 100% compatible, but more or less the same, and even paying the same but with dedicated hardware you'd get way faster runners.
For companies with resources for infrastructure, sure.
For OSS, the unlimited free minutes of multiplatform CI offered by GitHub are literally impossible to replace. Maintaining runners yourself to do the same things would be somewhere between a part- and full-time job.
"Codeberg is a non-profit, community-led effort that provides services to free and open-source projects, such as Git hosting (using Forgejo), Pages, CI/CD and a Weblate instance."
Never say impossible.
Github is still "new" to a lot of us. OSS existed well before it, and will continue to exist well after.
If Codeberg starts offering Mac and Windows runners alongside their Linux ones for free (or at an achievable price point) for a modest OSS project I'll certainly look at it very closely. If all I needed was a Linux runner, I'd probably be on there already.
And yes, if we make OSS just about hosting the code, things are much simpler. If you're a piece of desktop software though, and you have users, they'll typically (and reasonably) want auditable signed binaries on all the platforms you support, which requires multiplatform CI.
I'm not sure I'd put it at full slop, but the book is clearly AI assisted. You can view samples on retailer websites, and it features 'interjections' from an AI avatar. Further, in the opening pages, the author feels it pertinent to point out that every fact in the book is verified. Not my cup of tea.
I mean, to those who played them, 'custom map' is basically just a term of art indicating the things you said. In the parlance of the mid-2000s WC3 scene, you would call them custom games or custom maps.
Or, if you were slightly older, you might call them UMS, as they were in Starcraft. Short for "Use Map Settings", indicating that the game logic should come from the scripts and triggers in the map file rather than the built-in logic for ladder games.
> “We work in tech,” she says. “Our kids [aren’t] getting any cell phones, no smartphones, no Instagrams. I write the algorithms. I don’t want my kids to touch those algorithms.”
It's disgusting that this has become a casual attitude and admission by the tech worker class. No one should be getting this free pass.
"I am actively harming children and society with my livelihood (except my own, because I am so smart). Here I am proudly and smugly stating this in a news article."
I disagree - I think it's not much different than working at a distillery or cigar company (wrappery?). Social media is a vice not very different from whiskey or cigars - they're addictive, feel good in the short term, and are problematic to have too much of or to do habitually. But we still let people indulge in them because they're fine in small quantities for responsible adults, and we expect that parents will not let their kids have access to them.
The only differences as far as I can see are in buying - a child could technically buy a phone for themself if they had the money and create an account on Instagram for free - and in cultural recognition of social media as a vice, which I believe is starting to change.
The overall point is, the law should assume that adults are reasonably intelligent and responsible people, and that parents should be the ones responsible for parenting their own children their own way.
> The overall point is, the law should assume that adults are reasonably intelligent and responsible people,
Over cigarettes and alcohol. The most inconsequential stuff.
But don’t say the words “direct democracy”.[1] Then people being reasonably intelligent and responsible gets forgotten. By the hive mind at least.
But people should be assumed to be reasonably intelligent and responsible. If that narrative allows us to make money off them. Not when it comes to democracy and political autonomy, of course. Shudders.
Where’s the option for people who are weak willed when it comes to something? Can they ban themselves from buying these goods? If not, where are the heroes that are working on that?
> and that parents should be the ones responsible for parenting their own children their own way.
There are whole studies of psychology weaponized against children to make them act as consumer proxies for their parents. To optimize nagging.
But it's every pair of parents for themselves. Against all of marketing. "Responsibility." Because that makes money.
Is a panel of 'neutral' experts even possible to field in this area? I feel like anyone with sufficiently in depth knowledge of both the AV1 and HEVC specs has almost certainly derived a big paycheck for years from stakeholders on one side or the other of these lawsuits.
I'm no expert, but Google having designed AV1, I can certainly imagine a world where the codec infringes upon HEVC just enough that the lawsuit fees would come out in the wash.
You are right about the danger of non-neutral experts, but there still is an essential difference between a group of experts and a jury.
The experts may be biased, but when they open their mouths and try to argue their position, their bias becomes obvious to the other experts and it can be contradicted with logical arguments.
Unless all the experts work for an interested party, it would be very difficult to impose an incorrect verdict, because it is impossible to argue in its favor without the mistakes in the argument being immediately exposed by an interlocutor.
On the other hand, with a standard jury most people will be unable to see what is wrong in the arguments presented to them and they will not be able to distinguish truth from lies in such technical subjects.
The US elections and the elections in many other countries are an eloquent proof of the capacity of average people for distinguishing truth from lies concerning much simpler facts than the details of video compression patents. Expecting a jury to choose the right verdict in such a trial seems too optimistic.
This looks like slop? The README is full of emojis and kind of incoherent, there are no implementation details, it claims to have a Metal backend that doesn't seem to exist, etc.
This is definitely not worth using. It doesn't even say what hypervisor it's using. Is it using QEMU? Docker? Podman? Lima? Colima?
And also this chart is super weird:
    Solution    Latency  HiDPI    Native Integration  Setup Complexity
    Cocoa-Way   Low      Yes      Native windows      Easy
    XQuartz     High     Partial  X11 quirks          Medium
    VNC         High     No       Full screen         Medium
    VM GUI      High     Partial  Separate window     Complex
A standard VM will always be the easiest to set up by far. And latency should be the same across all 4. I mean after all it's a VM running on your local machine. Honestly I don't even know what it means when it says "Latency".
I also looked at some of the code and it's using OpenGL 3.3 Core which is... super old. But it makes sense in the context of this being LLM-generated since most of its training data is probably OpenGL 3.3 Core code....
Overall this project is very strange. It makes me feel more confident in my skills; AI isn't all that great. It's all hype. You can get to the frontpage of HN, and if you're Peter Steinberger you can get acquired by OpenAI for a billion dollars. But that's about it. The code isn't getting any better.
This reminds me of that C-compiler-in-Rust publicity stunt by Anthropic. There's no substance. It's just a headline.
Gotcha thanks for that info. Yeah that's insane. You have to read the description of a YouTube video to understand what a project on Github is doing. There is no architecture here.