> They were obviously not trying to make a sweeping comment about the entire future of the field
OP said “AI can very efficiently apply common patterns to vast amounts of code, but it has no inherent "idea" of what it's doing.”
I'm not going to patronize you by explaining why this is not "very precise", or why its lack of temporal caveats is an issue, as I've already done so in an earlier comment. If you're still confused, you should read the sentence a few times until you understand. OP did not even mention which specific model they tested, and did not provide any specific prompt example.
> Are you using ChatGPT to write your loquacious replies?
If you can't handle a few short paragraphs as a reply, or find it unworthy of your time, you are free to stop arguing. The Hacker News guidelines actually encourage substantive responses.
I also assume that in the future, accusing a user of using ChatGPT will be against site guidelines, so you may as well start phasing that out of your repertoire now.
Here are some highlights from the Hacker News guidelines regarding comments:
- Don't be snarky
- Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.
- Assume good faith
- Please don't post insinuations about astroturfing, shilling, brigading, foreign agents, and the like. It degrades discussion and is usually mistaken.
This is a lot of words, but does any of it contradict this:
> AI can very efficiently apply common patterns to vast amounts of code, but it has no inherent "idea" of what it's doing.
Are you saying that AI does have an inherent idea of what it's doing, or that it is doing more than that? Today?
We're in an informal discussion forum. I don't think the bar we're looking for is some rigorous deductive proof. The above matches my experience as well. It's a handy, interactive, applied version of an Internet search.
If someone has a different experience that would be interesting. But this just seems like navel-gazing over semantics.
> Are you saying that AI does have an inherent idea of what it's doing or is doing more than that?
No. I stated that OP cannot make that kind of blanket, non-temporally constrained statement about artificial intelligence.
> We're in an informal discussion forum. I don't think the bar we're looking for is some rigorous deductive proof
We're in a technology-oriented discussion forum; the minimum bar for any claim should be that it is supported by evidence. Otherwise, it should be presented as what it is: opinion.
> this just seems like navel-gazing over semantics.
In my opinion, conversation is much easier when we can agree that words should mean something. Imprecise language matched with an authoritative tone can mislead an audience. This topic in particular is rife with imprecise and uninformed arguments, and so we should take more care to use our words correctly, not less.
Furthermore, my argument goes beyond semantics: it also concerns the importance of constraints when making broad, unsupported claims.