
Torvalds is cautiously optimistic and hopes that AI will be able to spot bugs in code. I have not seen that kind of AI yet. What I do see on a regular basis are all the concerning issues from the Gentoo post, for example plausible-looking BS or AI spam. It is all delivered to my doorstep, so to speak. That is the issue.


ChatGPT 3.5 already spots bugs, e.g. when I swap the order of conditions in FizzBuzz, an error that a human could make. We've been at the point where AI can help spot bugs for a while already.
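(For context, the swapped-condition FizzBuzz bug the parent is presumably describing looks something like this sketch; the function name is mine, not from the comment:)

```python
# Classic FizzBuzz ordering bug: because i % 3 is checked before
# i % 15, the combined "FizzBuzz" branch can never be reached --
# every multiple of 15 already matched the i % 3 test above it.
def fizzbuzz_buggy(i):
    if i % 3 == 0:
        return "Fizz"
    elif i % 5 == 0:
        return "Buzz"
    elif i % 15 == 0:   # dead branch
        return "FizzBuzz"
    return str(i)

print(fizzbuzz_buggy(15))  # prints "Fizz", not "FizzBuzz"
```

This is exactly the kind of locally-plausible, globally-wrong code that an LLM (or a careful reviewer) can flag by reasoning about branch reachability.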

AI can be used poorly, AI can be used well.


Another problem is how we arrive at a model that can be used poorly or well. There are huge copyright and ethical problems underneath every big model, and I refuse to use models trained on copyrighted material without consent.

Gentoo is right here, and until we clear these hurdles, I won't touch any of these systems with a ten-foot pole.


Ok, now ask it how many Ms are in "ammunition". Just because it can do some things some of the time doesn't mean we'd happily accept contributions from it.


1) It does not need to solve every issue to be useful; it just needs to surface some issue that a human reviewer can then validate. It's seen a lot of code; it can find common issues.

2) The specific issue you're talking about exists because models don't see letters; they see tokens, which are groups of letters (subwords). They can't count letters because they can't actually "see" what they're counting. This is being worked on as well.
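(A minimal illustration of the point above. The subword split shown is hypothetical, not the output of any real tokenizer:)

```python
# Counting letters is trivial when you can actually see them:
word = "ammunition"
print(word.count("m"))  # prints 2

# An LLM, however, receives the text as subword tokens, e.g. a
# hypothetical BPE split like:
tokens = ["amm", "unition"]  # illustrative only, not a real tokenizer's output
# Internally each token is just an integer ID, so the individual
# letters (and their count) are not directly visible to the model.
```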


Someone committing poor-quality LLM-generated code and deeming it appropriate for review could create equally bad, if not worse, handwritten code. By extension, anyone who merges poor-quality LLM code could merge equally poor handwritten code. So ultimately it comes down to their judgement and to trust in the contribution process. If poor-quality code ended up in the product, then it's the process that failed. Just because someone can hit you with a stick doesn't mean we should cut down the trees; we should educate people to stop hitting others with sticks instead.

"Banning LLM content" is, in my opinion, effort spent on the wrong thing. If you want to ensure the quality of the code, you should focus on making the code review and merge process more thorough at filtering out subpar contributions, instead of wasting time trying to enforce unenforceable policies. Those only give a false sense of trust and security. Would a "[x] I solemnly swear I didn't use AI" checkbox give anything more than a false sense of security? Cheaters gonna cheat, and trusting them would be naive, to put it politely...

Spam... yeah, that is a valid concern, but it's also something that should be solved at the organizational level.


Cheaters are gonna cheat, but filtering out the honest/shameless LLM fans is still an improvement. And once you do find out that they lied, you now have a good reason to ban them. Win/win.


Well, Torvalds says in the interview that 'we already have tools such as linters and compilers which speed up the work we do as part of software development'.

I get the impression he agrees this road to LLM content is inevitable, but he also emphasises the role of the reviewer, who takes the final decision.


I had an LLM very patiently explain to me why I crashed prod when I used the wrong conversion factor between ms and µs. Thanks, SI, very cool that one of the most commonly used units needs Unicode to be entered into code.

LLMs are absolutely already helping with catching bugs and improving code quality.
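(A minimal sketch of the ms/µs mix-up described above; the variable names are mine, not from the original incident:)

```python
# Milliseconds and microseconds differ by a factor of 1000, so
# passing a µs value where ms is expected is off by three orders
# of magnitude -- e.g. a 250 ms timeout silently becomes 250 s.
timeout_us = 250_000              # 250 000 µs, i.e. 250 ms

wrong_ms = timeout_us             # bug: µs value used as ms -> 250 s timeout
right_ms = timeout_us / 1_000    # correct conversion -> 250.0 ms

print(wrong_ms, right_ms)
```

The "us" spelling is a common ASCII stand-in for µs precisely because typing the micro sign is awkward, which is how this class of bug sneaks past review.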


I had an LLM patiently show me use-after-free bugs in non-existent Asterisk C code it just made up. :D


Obviously that's your fault for not having the code it found the bugs in. Why are you attacking progress?


> Why are you attacking progress?

Progress to where? One should not use "progress" as an unqualified noun to denote a scalar. Progress is a vector, with both magnitude and direction. The direction part is really important.


:DDD



