
Coderabbit is an LLM code review company, so their incentives point the opposite way: AI is terrible and you need more AI to review it.

fwiw, I agree. LLM-powered code review is a lifesaver. I don't use Coderabbit, but all of my PRs go through Copilot before another human looks at them. It's almost always right.





Your comment history suggests a pro-AI bias on par with the AI companies'. I don't understand it. It seems like critical thinking, nuance, and just basic caution have been turned off like a light-switch for far too many people.

> It seems like critical thinking, nuance, and just basic caution have been turned off like a light-switch for far too many people.

Ironically, this response contains no critical thinking or nuance.


Such a typical HN "gotcha!".

They're not wrong. I think many people also saw/see the trajectory of the models.

If you were in favor of AI doing the majority of coding a year ago, you would have been optimistically ahead of what the tech was actually capable of.

If you are strongly against AI doing the majority of coding now, you are likely well behind what the current tech is capable of.

People who were pragmatic and knowledgeable anticipated this rise in capability.


I recommend engaging with ideas next time, rather than making reductive, ad-hominem, thought-terminating statements.

Thanks! I recommend not reading all comments literally. We have a significant hype bubble atm and I'm not exactly alone in thinking how crazy it is. I think you can draw a connection from my exasperated statement to that if you really wanted to.

You intentionally made disparaging remarks about someone and attempted to tie their having an opinion about a technology to the motives of people who have a vested financial interest in said technology.

You didn't engage at all on the substance of their comment - that they find AI useful for doing code reviews - and instead made a comment that was nothing but condescension.

All of that is separate from whether or not AI is overhyped or anything else - it being valuable for PRs could be true while it is also overhyped. If true, that could be some of the nuance you seem to be so concerned about us lacking.


1. No, I haven't suggested financial interest. There are plenty of non-financial ones on this forum.

2. True, I challenged the person's bias, considering that their extraordinary historical comments lacked extraordinary evidence.


Our industry never exhibited an abundance of caution, but if you have trouble understanding the value of AI here, consider that you are akin to an assembly language programmer in the 1970s or 80s who couldn't understand why people were so gung-ho about these compilers that just output worse code than they could write by hand. In retrospect, compilers only got better and better, familiarity with programming languages and compilation toolchains became a valuable productivity skill, and the market for assembly language programming either stagnated or shrank.

Doesn't it seem plausible to you that, whatever the rate of bugs in AI-generated code today, that bug count is only going to go down? Doesn't it then seem reasonable to say that programmers should start familiarizing themselves with these new tools, learning where the pitfalls are and how to avoid them?


> compilers only got better and better

At no point did compilers produce stochastic output. The intent the user expressed was translated down with much, much higher fidelity, repeatability, and explainability. Most important of all, it completely removed the need for the developer to meddle with that output. If anything, it became a verification tool for the developer's own input.

If LLMs are that good, I dare you to skip the programming language and have them write machine code directly next time. And that is exactly how it is going to feel if we treat them as being as valuable as compilers.


> At no point did compilers produce stochastic output. [...] Most important of all, it completely removed the need for the developer to meddle with that output.

Yes, once the optimizations became sophisticated enough and reliable enough that people no longer needed to think about it or go down to assembly to get the performance they needed. Do you get the analogy now?


I don't know why you'd think your analogy wasn't clear in the first place. But your analogy can't support you on the assertion that optimizations will be sophisticated and reliable enough to completely forget about the programming language underneath.

If you have any first-principles thinking on why this is more likely than not, I am all ears. My epistemic bet is that it is not going to happen, or that if we somehow end up there, the language we will have to use to instruct them will be no different from any other high-level programming language, so the point will be moot.


> But your analogy can't support you on the assertion that optimizations will be sophisticated and reliable enough to completely forget about the programming language underneath.

Where did I make that assertion?


Here is where I got that impression:

> once the optimizations became sophisticated enough

Either way I am not trying to litigate here. Feel free to correct me if your position was softer.


No, because programmers aren't the ones pushing the wares; it's business magnates and salespeople, the two groups software developers should never trust.

Maybe if this LLM craze were being pushed by democratic groups, where citizens were allowed to state their objections to such systems and those objections were taken seriously. But what we currently have is business magnates who just want to get richer, with no democratic controls.


> No, because programmers aren't the ones pushing the wares; it's business magnates and salespeople.

This is not correct; plenty of programmers see value in these systems and use them regularly. I'm not really sure what's undemocratic about what's going on, but that seems beside the point: we're presumably mostly programmers here, talking about the technical merits and downsides of an emerging tech.


This seems like an overly reductive worldview. Do you really think there isn't genuine interest in LLM tools among developers? I absolutely agree there are people pushing AI in places where it is unneeded, but I have not found software development to be one of those areas. There are lots of people experimenting and hacking with LLMs because of genuine interest and perceived value.

At my company, there is absolutely no mandate for use of AI tooling, but we have a very large number of engineers who are using AI tools enthusiastically simply because they want to. In my anecdotal experience, those who do tend to be much better engineers than the ones who are most skeptical or anti-AI (though it's very hard to separate how much of this is the AI tooling, and how much is that naturally curious engineers who look for new ways to improve inevitably become better engineers than those who don't).

The broader point is, I think you are limiting yourself when you immediately reduce AI to snake oil being sold by "business magnates". There is surely a lot of hype that will die out eventually, but there is also a lot of potential there that you guarantee you will miss out on when you dismiss it out of hand.


I use AI every day and run my own local models; that has nothing to do with seeing salespeople acting like salespeople or con men being con artists.

Also, add in the fact that big tech has been extremely damaging to Western society for the last 20 years, and there's really little reason to trust them. Especially since we see how they treat those with opinions different from their own (trying to force them out of power, ostracizing them publicly, or in some cases straight up poisoning people and giving them cancer).

Not really hard to see how people can be against such actions? Well buckle up bro: come post-2028, expect a massive crackdown and regulations against big tech. It's been boiling for quite a while, and there are trillions of dollars to plunder for the public's benefit.


If I have a horse and plow and you show up with a tractor, I will no doubt get a tractor asap. But if you show up with novel amphetamines for you and your horse and scream "Look how productive I am! We'll figure out the long-term downsides, don't you worry! Just more amphetamines probably!", I'm happy to be a late adopter.

A tractor based on a Model T wouldn't have been very compelling either at the time. Not many horse-drawn plows these days though.

I understand that you've convinced yourself that progress is inevitable. I'll ponder it on my commute to Mars. Oh wait, that's still only on the telly.

High-level languages were absolutely indispensable at a time when every hardware vendor had its own bespoke instruction set.

If you only ever target one platform, you might as well do it in assembly; it's just unfashionable. I don't believe you'd lose any 'productivity' compared to e.g. C, assuming equal amounts of experience.


> I don't believe you'd lose any 'productivity' compared to e.g. C, assuming equal amounts of experience.

I'm skeptical, but do you think that you'd see no productivity gains for Python, Java or Haskell?


Those are garbage-collected environments. I have some experience with a garbage-collected 'assembly' (.NET CIL). It is a delight to read and write compared to most C code.

Agree to disagree then! I've done plenty of CIL reading and writing. It's fine, but not what I'd call pleasant, not even compared to C.

Type checking, even something as trivial as C's, is a boon to productivity, especially on large teams, but also when coding solo if you have anything else in your brain.

compilers aren't probabilistic models though

True. The question is whether that's relevant to the trajectory described or not.

Successful compiler optimizations are probabilistic though, from the programmer's point of view. LLMs are internally deterministic too.

What? Do you even know how compilers work?

Are you able to predict with 100% accuracy when a loop will successfully unroll, or various interprocedural or intraprocedural analyses will succeed? They are applied deterministically inside a compiler, but often based on heuristics, and the complex interplay of optimizations in complex programs means that sometimes they will not do what you expect them to do. Sometimes they work better than expected, and sometimes worse. Sounds familiar...

> Are you able to predict with 100% accuracy when a loop will successfully unroll, or various interprocedural or intraprocedural analyses will succeed?

Yes, because:

> They are applied deterministically inside a compiler

Sorry, but an LLM randomly generating the next token isn't even comparable.

Deterministic complexity =/= randomness.


> Yes, because:

Unless you wrote the compiler, you are 100% full of it. Even as the compiler writer you'd be wrong sometimes.

> Deterministic complexity =/= randomness.

LLMs are also deterministically complex, not random.


> Unless you wrote the compiler, you are 100% full of it. Even then you'd be wrong sometimes

You can check the source code? What's hard to understand? If you find it compiled something wrong, you can walk backwards through the code; if you want to find out what it'll do, walk forwards. LLMs have no such capability.

Sure, maybe you're limited by your personal knowledge of the compiler chain, but again, complexity =/= randomness.

For the same source code and compiler version (+ flags), you get the exact same output every time. The same cannot be said of LLMs, because they use randomness (temperature).

> LLMs are also deterministically complex, not random

What exactly is the temperature setting in your LLM doing then? If you'd like to argue that the pseudorandom generators our computers use aren't truly random - fine, I agree. But for all practical purposes they're random, especially when you don't control the seed.
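
To make that concrete, here's a toy sketch (Python, with made-up logits; real inference stacks differ in the details) of what temperature does to next-token sampling:

    import math, random

    # hypothetical next-token scores; a real model produces these, here they're invented
    logits = {"foo": 2.0, "bar": 1.5, "baz": 0.1}

    def sample(logits, temperature):
        if temperature == 0:
            # greedy decoding: always pick the highest-scoring token -> deterministic
            return max(logits, key=logits.get)
        # scale by 1/temperature, softmax, then draw -> stochastic
        weights = {t: math.exp(s / temperature) for t, s in logits.items()}
        r = random.random() * sum(weights.values())
        for token, weight in weights.items():
            r -= weight
            if r <= 0:
                return token
        return token  # guard against float rounding

    print(sample(logits, 0.0))  # same token on every run
    print(sample(logits, 0.8))  # can differ from run to run unless you fix the seed

With temperature 0 (or a pinned seed) you get repeatability, but the default setups most people use sample at a nonzero temperature, which is exactly the stochasticity being discussed.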


> If you find it compiled something wrong, you can walk backwards through the code; if you want to find out what it'll do, walk forwards. LLMs have no such capability.

Right, so you agree that optimization outputs are not fully predictable in complex programs, and what you're actually objecting to is that LLMs aren't like compiler optimizations in the specific ways you care about; somehow this is supposed to invalidate my argument that they are alike in the specific ways that I outlined.

I'm not interested in litigating the minutiae of this point: programmers who treat the compiler as a black box (ie. 99% of them) see probabilistic outputs. The outputs are generally reliable according to certain criteria, but unpredictable.

LLMs are also typically probabilistic black boxes. The outputs are also unpredictable, but also somewhat reliable according to certain criteria that you can learn through use. Where the unreliability is problematic, you can often make up for their pitfalls. The need for this is dropping year over year, just as the need for assembly programming to eke out performance dropped with each year of compiler development. Whether LLMs will become as reliable as compiler optimizations remains to be seen.


> invalidate my argument that they are alike in the specific ways that I outlined

Basketballs and apples are both round, so they're the same thing, right? I could eat a basketball and make a layup with an apple, so what's the difference?

> programmers who treat the compiler as a black box (ie. 99% of them) see probabilistic outputs

In reality this is at best the bottom 20% of programmers.

No programmer I've ever talked to has described compilers as probabilistic black boxes - and I'm sorry if your circle does. There's simply no use of probability involved, and all modern compilers are definitionally white boxes (open source).


My operating assumption, for everyone acting the way you described, is that it's predicated on the belief of "I have an opportunity to make money from this." It is exceedingly rare to find an instance of someone using the tech purely for the love of the game who isn't also tying it back to income generation in some way.

I use it as an accelerated search engine to learn about things quicker than I otherwise would. But that's it. I ask it a question, it tells me an answer, and I work from there myself. Slapping it into your editor to write the code for you sounds disastrous to me. And also incredibly boring.

it's called a love of money

Their incentives are perfectly aligned - you're making more bugs, so surely you need some AI code review to help prevent that.

It's literally right at the end of their recommendations list in the article.


The original comment said:

> an article that claims AI is oddly not as bad when it comes to generating gobbledegook

Ironically, Coderabbit wants you to believe AI is worse at generating gobbledegook.


Make the gobbledygook from your gobbledygook generator better with our proprietary gobbledygook generator.

I'm obviously taking the piss here, but the irony is amusing.


It sounds stupid but it works. I've seen it. I put Copilot on AI-generated slop PRs and hit refresh until it stops commenting. It's great seeing it take out all the dead code.

Do you use Copilot for coding and then also Copilot for reviewing? Or are you using some other coding agent and Copilot only for PR reviews?

I do not use Copilot for coding. I use other assistants now.

Copilot code review is amazing. I use it all the time.



