That's true too, but the bigger difference from my point of view is that factual errors in Wikipedia are relatively uncommon, whereas in the LLM output I've been able to generate, factual errors vastly outnumber correct facts. LLMs are fantastic at creativity and language translation but terrible at saying true things instead of false things.