Your comment history suggests a pro-AI bias on par with that of the AI companies themselves. I don't understand it. It seems like critical thinking, nuance, and just basic caution have been turned off like a light switch for far too many people.
Thanks! I recommend not reading all comments literally. We have a significant hype bubble atm and I'm not exactly alone in thinking how crazy it is. I think you can draw a connection from my exasperated statement to that if you really want to.
You intentionally made disparaging remarks about someone and tried to equate their having an opinion about a technology with the motives of people who have a vested financial interest in said technology.
You didn't engage at all on the substance of their comment - that they find AI useful for doing code reviews - and instead made a comment that was nothing but condescension.
All of that is separate from whether or not AI is overhyped or anything else - it being valuable for PRs could be true while it is also overhyped. If true, that could be some of the nuance you seem to be so concerned about us lacking.
Our industry has never exhibited an abundance of caution, but if you have trouble understanding the value of AI here, consider that you are akin to an assembly language programmer in the 1970s or 80s who couldn't understand why people were so gung-ho about these compilers that just output worse code than they could write by hand. In retrospect, compilers only got better and better, familiarity with programming languages and compilation toolchains became a valuable productivity skill, and the market for assembly language programming stagnated or shrank.
Doesn't it seem plausible to you that, whatever the rate of bugs in AI-generated code today, that rate is only going to go down? Doesn't it then seem reasonable to say that programmers should start familiarizing themselves with these new tools, learning where the pitfalls are and how to avoid them?
At no point did compilers produce stochastic output. The intent the user expressed was translated down with much, much higher fidelity, repeatability, and explainability. Most important of all, it completely removed the need for the developer to meddle with that output. If anything, the compiler became a verification tool for the developer's own input.
If LLMs are that good, I dare you to skip the programming language and have them write machine code directly next time. And that is exactly what it is going to feel like if we treat them as being as valuable as compilers.
> At no point did compilers produce stochastic output. [...] Most important of all, it completely removed the need for the developer to meddle with that output.
Yes, once the optimizations became sophisticated enough and reliable enough that people no longer needed to think about it or go down to assembly to get the performance they needed. Do you get the analogy now?
I don't know why you'd think your analogy wasn't clear in the first place. But your analogy can't support the assertion that optimizations will become sophisticated and reliable enough to let us completely forget about the programming language underneath.
If you have any first-principles thinking on why this is more likely than not, I am all ears. My epistemic bet is that it is not going to happen, or that if we somehow end up there, the language we will have to use to instruct them is not going to be so different from any other high-level programming language, and the point will be moot.
> But your analogy can't support the assertion that optimizations will become sophisticated and reliable enough to let us completely forget about the programming language underneath.
No because programmers aren't the ones pushing the wares, it's business magnates and sales people. The two core groups software developers should never trust.
Maybe it would be different if this LLM craze were being pushed by democratic groups where citizens are allowed to state their objections to such a system, and where such objections are taken seriously. But what we currently have is business magnates who just want to get richer, with no democratic controls.
> No because programmers aren't the ones pushing the wares, it's business magnates and sales people.
This is not correct, plenty of programmers are seeing value in these systems and use them regularly. I'm not really sure what's undemocratic about what's going on, but that seems beside the point, we're presumably mostly programmers here talking about the technical merits and downsides of an emerging tech.
This seems like an overly reductive worldview. Do you really think there isn't genuine interest in LLM tools among developers? I absolutely agree there are people pushing AI in places where it is unneeded, but I have not found software development to be one of those areas. There are lots of people experimenting and hacking with LLMs because of genuine interest and perceived value.
At my company, there is absolutely no mandate to use AI tooling, but we have a very large number of engineers who are using AI tools enthusiastically simply because they want to. In my anecdotal experience, those who do tend to be much better engineers than the ones who are most skeptical or anti-AI (though it's very hard to separate how much of this is the AI tooling and how much is that naturally curious engineers, always looking for new ways to improve, inevitably end up better than the ones who don't).
The broader point is, I think you are limiting yourself when you immediately reduce AI to snake oil being sold by "business magnates". There is surely a lot of hype that will die out eventually, but there is also a lot of potential there that you guarantee you will miss out on when you dismiss it out of hand.
I use AI every day and run my own local models; that has nothing to do with seeing sales people acting like sales people or con men being con artists.
Also add in the fact that big tech has been extremely damaging to Western society for the last 20 years; there's really little reason to trust them. Especially since we see how they treat those with different opinions than them (trying to force them out of power, ostracizing them publicly, or in some cases straight up poisoning people and giving them cancer).
Not really hard to see how people can be against such actions? Well, buckle up, bro: come post-2028, expect a massive crackdown and regulations against big tech. It's been boiling for quite a while, and there are trillions of dollars to plunder for the public's benefit.
If I have a horse and plow and you show up with a tractor, I will no doubt get a tractor asap. But if you show up with novel amphetamines for you and your horse and scream "Look how productive I am! We'll figure out the long-term downsides, don't you worry! Just more amphetamines probably!", I'm happy to be a late adopter.
I understand that you've convinced yourself that progress is inevitable. I'll ponder over it on my commute to Mars. Oh wait, that was still on the telly.
High-level languages were absolutely indispensable at a time when every hardware vendor had its own bespoke instruction set.
If you only ever target one platform, you might as well do it in assembly, it's just unfashionable. I don't believe you'd lose any 'productivity' compared to e.g. C, assuming equal amounts of experience.
Those are garbage-collected environments. I have some experience with a garbage-collected 'assembly' (.NET CIL). It is a delight to read and write compared to most C code.
Type checking, even that as trivial as C's, is a boon to productivity, especially on large teams but also when coding solo if you have anything else in your brain.
Are you able to predict with 100% accuracy when a loop will successfully unroll, or various interprocedural or intraprocedural analyses will succeed? They are applied deterministically inside a compiler, but often based on heuristics, and the complex interplay of optimizations in complex programs means that sometimes they will not do what you expect them to do. Sometimes they work better than expected, and sometimes worse. Sounds familiar...
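To make the heuristics point concrete, here's a toy sketch (the report flags assume reasonably recent clang and gcc; exact decisions vary by version, flags, and target):

    /* sum.c -- whether this loop gets unrolled or vectorized is decided by the
     * compiler's cost heuristics, not by anything explicit in the source. */
    #include <stddef.h>

    double sum(const double *xs, size_t n) {
        double acc = 0.0;
        for (size_t i = 0; i < n; i++) {
            acc += xs[i];  /* trip count unknown at compile time, so unrolling is a heuristic bet */
        }
        return acc;
    }

    /* Ask the compiler to report its decisions:
     *   clang -O3 -Rpass=loop-unroll -Rpass-missed=loop-unroll -c sum.c
     *   gcc   -O3 -fopt-info-loop-optimized -c sum.c
     */

You can read the remarks, but most people never do, which is rather the point.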
> Are you able to predict with 100% accuracy when a loop will successfully unroll, or various interprocedural or intraprocedural analyses will succeed?
Yes, because:
> They are applied deterministically inside a compiler
Sorry, but an LLM randomly generating the next token isn't even comparable.
> Unless you wrote the compiler, you are 100% full of it. Even then you'd be wrong sometimes
You can check the source code? What's hard to understand? If you find it compiled something wrong, you can walk backwards through the code; if you want to find out what it'll do, walk forwards. LLMs have no such capability.
Sure, maybe you're limited by your personal knowledge of the compiler chain, but again, complexity =/= randomness.
For the same source code and compiler version (+ flags), you get the exact same output every time. The same cannot be said of LLMs, because they use randomness (temperature).
> LLMs are also deterministically complex, not random
What exactly is the temperature setting in your LLM doing then? If you'd like to argue pseudorandom generators our computers are using aren't random - fine, I agree. But for all practical purposes they're random, especially when you don't control the seed.
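To be concrete about what temperature does: it scales the logits before sampling, which is exactly where the nondeterminism enters. A minimal sketch in C (the logit values are made up; a real model samples over tens of thousands of candidate tokens):

    /* temperature.c -- toy illustration of temperature sampling, the usual
     * source of nondeterminism in LLM output. Compile with: cc temperature.c -lm */
    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Divide logits by temperature, softmax, then draw one index at random.
     * As temperature -> 0 this approaches argmax (deterministic); higher
     * temperatures give lower-scoring tokens real probability mass. */
    static int sample(const double *logits, int n, double temperature) {
        double weights[16];  /* toy example: assumes n <= 16 */
        double total = 0.0, acc = 0.0;
        for (int i = 0; i < n; i++) {
            weights[i] = exp(logits[i] / temperature);
            total += weights[i];
        }
        double r = ((double)rand() / RAND_MAX) * total;
        for (int i = 0; i < n; i++) {
            acc += weights[i];
            if (r <= acc) return i;
        }
        return n - 1;
    }

    int main(void) {
        srand((unsigned)time(NULL));  /* uncontrolled seed: same input, different output */
        double logits[4] = {2.0, 1.5, 0.5, -1.0};  /* made-up scores for 4 candidate tokens */
        for (int run = 0; run < 5; run++)
            printf("picked token %d\n", sample(logits, 4, 0.8));
        return 0;
    }

Run it a few times and you'll likely get different picks; fix the seed (or drop the temperature toward zero) and the output becomes repeatable, which is exactly the knob hosted LLMs don't usually let you fully control.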
> If you find it compiled something wrong, you can walk backwards through the code; if you want to find out what it'll do, walk forwards. LLMs have no such capability.
Right, so you agree that optimization outputs are not fully predictable in complex programs, and what you're actually objecting to is that LLMs aren't like compiler optimizations in the specific ways you care about; somehow this is supposed to invalidate my argument that they are alike in the specific ways that I outlined.
I'm not interested in litigating the minutiae of this point: programmers who treat the compiler as a black box (ie. 99% of them) see probabilistic outputs. The outputs are generally reliable according to certain criteria, but unpredictable.
LLMs are also typically probabilistic black boxes. The outputs are also unpredictable, yet somewhat reliable according to certain criteria that you can learn through use. Where the unreliability is problematic, you can often work around the pitfalls. The need for this is dropping year over year, just as the need for assembly programming to eke out performance dropped with each year of compiler development. Whether LLMs will become as reliable as compiler optimizations remains to be seen.
> invalidate my argument that they are alike in the specific ways that I outlined
Basketballs and apples are both round, so they're the same thing right? I could eat a basketball and I can make a layup with an apple, so what's the difference?
> programmers who treat the compiler as a black box (ie. 99% of them) see probabilistic outputs
In reality this is at best the bottom 20% of programmers.
No programmer I've ever talked to has described compilers as probabilistic black boxes - and I'm sorry if your circle does. Unfortunately, there's no use of probability involved, and all modern compilers are definitionally white boxes (open source).
My operating assumption, for everyone acting the way you described, is that it's predicated on the belief of "I have an opportunity to make money from this." It is exceedingly rare to find an instance of someone using the tech purely for the love of the game who isn't also tying it back to income generation in some way.
I use it as an accelerated search engine to learn about things quicker than I otherwise would. But that's it. I ask it a question, it tells me an answer, and I work from there myself. Slapping it into your editor to write the code for you sounds disastrous to me. And also incredibly boring.
The most complex thing to support is people's resumes. If carpenters were incentivized like software devs are, we'd quickly start seeing multi-story garden sheds in reinforced concrete, because every carpenter's dream job at Bunkers Inc. pays 10x more.
Like many things in dev it sounds sooo good on the surface, but is a minefield in practice (Brandolini's law + The Iron Law of Bureaucracy for starters).
I'd only advocate it in a very carefully curated team.
“like it's some sort of established fact” -> “My understanding”?! a.k.a. pure speculation. Some of you AI fans really need to read your posts out loud before posting them.
Isn't that still "acqui-hiring" according to common usage of the term?
Sometimes people use the term to mean that the buyer only wants some/all of the employees and will abandon or shut down the acquired company's product, which presumably isn't the case here.
But more often I see "acqui-hire" used to refer to any acquisition where the expertise of the acquired company is the main reason for the acquisition (rather than, say, an existing revenue stream), and the buyer intends to keep the existing team dynamics.
Acquihiring usually means that the product the team is working on will be discontinued and the team members will be set to work on other parts of the acquiring company.
That is part of the definition given in the first paragraph of the Wikipedia article, but I think it’s a blurry line when the acquired company is essentially synonymous with a single open source project and the buyer wants the team of experts to continue developing that open source project.
The team is continuing to develop the open source project that was synonymous with the company, but they're explicitly no longer going to try to monetize it. I think that squarely counts as an acquihire according to common usage.
I've seen a few of these seemingly random acquisitions lately, and I congratulate the companies and individuals that are acquired during this gold rush, but it definitely feels awkwardly artificial.
That also means a much larger team and great possibilities for good perf reviews, so basically an excellent outcome in a corporate env. People follow incentives.