
It's true that LLMs aren't trained on strings of random words, so in a sense you are correct that they have some "ground truth." They wouldn't generate anything coherent at all otherwise. Does that even need to be stated, though? You don't need AI to generate random words.

The more important point is that they aren't trained on only factual (or statistically certain) statements. That's the ground truth that's missing. It's easy to feed an LLM a bunch of text scraped from the internet. It's much harder to teach it how to separate fact from fiction. Even the best human minds who live or have ever lived haven't been able to do that flawlessly. We've created machines with more memory than any human, much quicker recall, and the ability to converse with vast numbers of people at once, but they perform at about par with humans in discerning fact from fiction.

That's my biggest concern about creating super-powered artificial intelligence. Its superpowers are only super in a couple of dimensions, and people mistake that for general intelligence. I came across someone online who genuinely believed ChatGPT was creating a custom diet plan tailored to their specific health needs, based on a few prompts. That is scary!


