
> I wanted the retrieval of a good recipe, not an amalgam of “things that plausibly look like recipes.”

And that's the core issue with AI. It is not meant to give you answers, but to construct output that looks like an answer. How that is useful, I fail to understand.



So LLMs are effectively a stack of (gradient-descent) learned lookup/hash tables, and you can build primitive but working logic systems out of lookup tables (think unidirectional Turing machines). A rough sketch of that idea follows.
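To make the "logic out of lookup tables" point concrete, here is a minimal illustrative sketch (my own names and structure, not from the comment): boolean gates and a tiny finite-state machine built purely from table reads, with no arithmetic or branching on values.

```python
# Single-bit NAND gate as a lookup table.
NAND = {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def xor(a, b):
    # XOR built purely by chaining NAND lookups.
    n1 = NAND[(a, b)]
    return NAND[(NAND[(a, n1)], NAND[(n1, b)])]

# A tiny finite-state machine (parity checker) as one more table:
# (state, input_bit) -> next_state.
PARITY = {("even", 0): "even", ("even", 1): "odd",
          ("odd", 0): "odd", ("odd", 1): "even"}

def parity(bits, state="even"):
    for b in bits:
        state = PARITY[(state, b)]
    return state

assert xor(1, 0) == 1 and xor(1, 1) == 0
assert parity([1, 1, 0, 1]) == "odd"
```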

Can you elaborate on your claim "it is not meant to give you answers"?


I am talking about what an ordinary person thinks an answer is. If the AI industry could pull its collective head out of its ass, it would notice that humans are looking for answers that are factually true, not probabilistically close to the concept of a plausible answer to the given question. To put it simply: if I ask my bank for a statement for the last month, I expect to see a list of actual transactions and I expect the numbers to add up, while AI fanboys are happy with output that looks like a bank statement but has made-up entries with random amounts paid in or out. I will likely get a lecture on how this is the wrong domain to apply AI to, but the core objection stands: humans expect facts and order, whilst AI cannot tell fact from made-up stuff and will keep on generating fake answers that look like what humans are looking for. Humans are wired for survival and for processing information in a way that lets us turn chaos into order, pattern, plan, or action script. That is why we find it so easy to spot AI-generated content and why we reject it.


Ok. Thanks for your clarification, that makes sense. I see you're being downvoted, possibly because your tone is weird, but it's worth noting that you're not wrong... People expect an AI to both interpret fuzzy inputs and give definitive answers out. I call it the "Mr. Data fallacy". Unfortunately the reality is garbage in, garbage out... Our inputs are garbage, so there must be some likelihood of garbage.

I think maybe the best we can hope for is to push garbage responses to the edges: when it gets something wrong, it gets it so obviously wrong (e.g. completely misformatted) that the answer is clearly unacceptable to the user, and obviously so.
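One way to read "push garbage to the edges" is strict output validation: accept a model answer only if it parses against an expected shape, and fail loudly otherwise, so malformed output is rejected rather than silently trusted. A minimal sketch, where `call_model` and the required fields are hypothetical stand-ins:

```python
import json

REQUIRED_KEYS = {"answer", "source"}

def get_checked_answer(prompt, call_model):
    raw = call_model(prompt)  # hypothetical LLM call
    try:
        parsed = json.loads(raw)
    except json.JSONDecodeError:
        # Obviously malformed output: reject instead of guessing.
        raise ValueError(f"Malformed output, rejecting: {raw!r}")
    missing = REQUIRED_KEYS - parsed.keys()
    if missing:
        raise ValueError(f"Output missing fields {missing}, rejecting")
    return parsed
```

This doesn't make the content correct, but it turns a class of failures into errors the user can actually see.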


> People expect an AI to both interpret fuzzy inputs and give definitive answers out.

Couldn't put it better myself.


LLMs' fundamental flaw is that, unlike many other forms of AI, they have NO understanding of the material they were trained on or its relationships to anything at all in the real world (viz. the recent bizarre answers to goat/river-crossing questions).

Ultimately, any really useful AI must understand real-world relationships - this is one reason I was always more bullish on the late Doug Lenat's Cyc than on any other AI - none of the others were grounded by that knowledge.

And a commitment to truthful and error-free answers (a la HAL 9000) is an absolute must. Keep in mind that even in Clarke's fictional account, HAL was proud of the 9000 series' record of "never making an error or distorting information", and never hallucinated or acted crazy until his training was overridden by his programmers for political purposes - this is EXACTLY what happens in today's wokified LLMs (can't allow another Tay!): the programmers redefine truth as political, in contravention of actual factual truth.


Demos well, falls over in production after you've made the sale.


I don't see how a RAG system that can cite sources from a document doesn't already address this (example [1])? Although I admit it's not a perfect solution yet...

[1] https://mattyyeung.github.io/deterministic-quoting
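For context, a rough sketch of the "deterministic quoting" idea in [1] as I understand it: the model only selects which retrieved chunks support the answer, and the quoted text shown to the user is copied verbatim from the source documents, so the quotes themselves cannot be hallucinated. `retrieve` and `ask_model_for_chunk_ids` are hypothetical stand-ins, not the linked implementation.

```python
def answer_with_verbatim_quotes(question, documents, retrieve, ask_model_for_chunk_ids):
    # Retrieval returns candidate chunks keyed by a stable ID.
    chunks = retrieve(question, documents)            # {chunk_id: text}
    # The model is asked only to pick which chunk IDs support the answer.
    chosen_ids = ask_model_for_chunk_ids(question, chunks)
    # Quotes are substituted verbatim from the source text, never regenerated.
    return [(cid, chunks[cid]) for cid in chosen_ids if cid in chunks]
```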



