I think what you'd find in a real lawyer's office is that they don't "remember" or "generate" very many references at all. Those books in the stereotypical image of a lawyer's office weren't just there to look lawyerly, and in modern times those computers are equipped with some expensive subscriptions to legal databases. Recalling references from memory is not how legal references are produced, not now, not in the past. Depending on an LLM (not "AI", LLMs specifically) to do this is insane, stupid, and anyone selling an LLM-based product for this purpose ought to be sued for fraud... and that only coincidentally because they happen to be selling their product to lawyers.

Why is anyone even defending this? LLMs are objectively unsuited for this. LLMs are objectively failing at this. LLM architecture is obviously not a good choice for this task to anyone with even a basic understanding of them. But... who cares? All this means is that LLMs are not suited for this task. I wouldn't be banging on this except for the engineers running around here with stars in their eyes thinking that despite all the evidence they've finally found the tool that does everything. No. We haven't. They don't. This isn't an attack on the viability of AI as a whole or LLMs specifically, any more than pointing out that a hammer is not a good tool for cutting grass is an attack on hammers.



I think many people have forgotten what mastery entails. It's about being able to reliably reproduce a process or solve a problem. Sometimes the process is deterministic and we engineer a tool for it. When it isn't, we approach it the old way, through learning and training.

Providing truthful information is one of the latter, and one of our flaws here is memory. Writing things down or recording them corrects that; the next problem is retrieving the snippet we need. That's one of the things a lawyer does. They don't invent laws out of thin air, they're just better at finding the helpful bits. Even today, I spend a lot of time reading library and language documentation to verify a hunch. But I still had to go through the learning phase to know what to search for.
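
To make the distinction concrete, here's a minimal sketch in Python (the database, its contents, and the function names are all hypothetical): a lookup against a recorded source fails visibly, while free generation fails by producing something plausible.

    # Hypothetical sketch: how retrieval and generation fail differently.
    CASE_LAW_DB = {
        "410 U.S. 113": "Roe v. Wade, 410 U.S. 113 (1973)",
    }

    def lookup_citation(cite):
        # Retrieval: the failure mode is a visible "not found".
        return CASE_LAW_DB.get(cite)  # None if the case isn't recorded

    def fabricate_citation(_prompt):
        # Stand-in for free generation: the failure mode is a
        # plausible-looking answer with no grounding at all.
        return "Smith v. Jones, 123 F.3d 456 (9th Cir. 1997)"

    print(lookup_citation("999 U.S. 999"))       # None -> you know to keep digging
    print(fabricate_citation("find me a case"))  # confident, possibly fictional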

In the Dreyfus model of skill acquisition, we have these four qualities:

    Recollection (non-situational or situational)
    Recognition (decomposed or holistic)
    Decision (analytical or intuitive)
    Awareness (monitoring or absorbed)

The novice is described by the left value and the expert by the right value of each attribute. For the novice, every part is costly. For the expert, mastery has reduced the cost of taking action, while a more deliberate approach remains available when the chance of error is too high. Novices hope LLMs can reproduce mastery; experts know that LLMs' inherent propensity for error makes them a liability.
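
If it helps to see the two poles side by side, here's a tiny illustrative encoding in Python (my own sketch, not anything from Dreyfus):

    from dataclasses import dataclass

    @dataclass
    class SkillProfile:
        recollection: str  # non-situational -> situational
        recognition: str   # decomposed -> holistic
        decision: str      # analytical -> intuitive
        awareness: str     # monitoring -> absorbed

    NOVICE = SkillProfile("non-situational", "decomposed", "analytical", "monitoring")
    EXPERT = SkillProfile("situational", "holistic", "intuitive", "absorbed")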

We need better tools, not probability machines.


Thanks for the in-depth, passionate follow-up! Right off the bat I want to clarify that I was talking about human cognition, not just typical attorney work: I'd stand by the assertion that it's hallucination all the way down, at the very least "hallucinating a symbolic representation of the book passage you read 2s ago".

Re: LLMs and law, I agree with all your complaints 100% if we constrain the discussion to direct/simplistic/"chatbot"-esque systems. But that's simply not where the frontier is. LLMs are a groundbreaking technique for building intuitive components within a much larger computational system that looks like existing complex software. We're not excited about (only) crazy new products; we're excited about enhancing existing products with intuitive features.

To briefly touch on your very strong belief that LLMs are a bad architecture for legal tasks: I couldn't disagree more. LLMs specialize in linguistic structures, somewhat tautologically. What's not linguistic about individual atomic tasks like "review this document for relevant passages" or "synthesize these citations and facts into X format"? Lawyers are smart and do lots of deliberation, sure, but that doesn't mean they're above the use of intuition.
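
As a sketch of what I mean by "intuitive components within a larger system" (all names here are hypothetical, and llm_complete stands in for whatever completion API you use), the LLM handles the linguistic subtask and deterministic code verifies the result:

    def extract_cited_cases(document_text, llm_complete):
        # llm_complete: any callable mapping a prompt string to text.
        raw = llm_complete(
            "List every case citation in the following text, one per line:\n"
            + document_text
        )
        return [line.strip() for line in raw.splitlines() if line.strip()]

    def verified_citations(document_text, llm_complete, db_lookup):
        # Keep only citations a real legal database confirms exist.
        candidates = extract_cited_cases(document_text, llm_complete)
        return [c for c in candidates if db_lookup(c) is not None]

The LLM proposes, the database disposes: a hallucinated citation never survives the db_lookup filter.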

Insofar as we're in an argument of some kind, my closing argument is that people as a whole can be pretty smart, and there's a HUGE wave of money going into the AI race all of a sudden. Like, dwarfing the "Silicon Valley era" altogether. What are the chances that you're seeing the super obvious problem that they're all missing? Remember that this isn't just stock price speculation, this is committed investment of huge sums of capital into this specific industry.



