
> But it is easy to see a near future where an AI agent just retrieves and summarizes the case for you. And does a much better job too.

I am significantly less confident that an LLM is going to be any good at putting a raw source like a court ruling PDF into context and adequately explaining to readers why the decision matters, which details matter, and what impact it will have. They can probably do an OK job summarizing the document, but not much more.

I do agree that given current trends there is going to be a significant impact on journalism, and I don't like that future at all. Particularly because we won't just have worse reporting, we won't have any investigative journalism, which is funded by the ads from relatively cheap "reporting only" stories. There's a reason we call the press the fourth estate, and we will be much poorer without them.

There's an argument to be made that the press has recently put itself in this position and hasn't done a great job, but I still think it's going to be a rather great loss.



> significantly less confident that an LLM is going to be any good at putting a raw source like a court ruling PDF into context and adequately explain to readers why

You should play with LLMs this week.


It’ll just link to some random unrelated court ruling.


If you think that's the case, you should really give current LLMs another shot. The version of ChatGPT from 3 years ago has more in common with the average chatbot from 50 years ago than it does the ChatGPT from today.


I literally work in the space, dude.


What condescending nonsense is this? I use all the major LLM systems, mostly with their most expensive models, and when I ask them for sources, often specifically for legal questions, half the time the linked source is not remotely relevant and does not remotely substantiate the claim it is being cited for. Almost never is the answer free of an error of some significance. They all still hallucinate very consistently if you push them into areas that are complicated and non-obvious; when they can't figure out an answer, they make one up. The reduction in apparent hallucinations in recent models seems to be more that they've learned specific cases where they should say they don't know, not that the problem has been solved in any broader sense.

This is true for first-party applications as well as for custom integrations, where I can explicitly verify that the context grounding them contains all of the relevant facts. It doesn't matter; that isn't enough. You can tell me I'm holding it wrong, but we've consulted with experts from Anthropic and OpenAI who have done major AI integrations at some of the most prominent AI-consuming companies. I'm not holding it wrong. It's just a horribly flawed piece of technology that must be used with extreme thoughtfulness if you want to do anything non-trivial without massive risk.
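For concreteness, "grounding with context" in one of those custom integrations looks roughly like the sketch below (assuming the OpenAI Python SDK; the model name and instruction wording are placeholders, not our actual setup). The point is that even with the full source document in the prompt and explicit instructions to stay inside it, the output still has to be checked against the source.

    # Minimal grounding sketch. Assumes the OpenAI Python SDK; the model name
    # and prompt wording are placeholders. The entire source document goes into
    # the prompt and the model is told to rely on it alone.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def grounded_answer(question: str, source_text: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder; any chat model with a large context window
            messages=[
                {"role": "system",
                 "content": ("Answer using only the document provided. "
                             "Quote the passage you relied on, and say 'not in "
                             "the document' if the answer is not supported.")},
                {"role": "user",
                 "content": f"Document:\n{source_text}\n\nQuestion: {question}"},
            ],
        )
        return response.choices[0].message.content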

I remain convinced that the people who can't see the massive flaws in current LLM systems must be negligently incompetent in how they perform their jobs. I use LLMs every day in my work and they are a great help to my productivity, but learning to use them effectively is all about understanding the countless ways in which they fail, the things they cannot be relied on for, and where they actually provide value.

They do provide value for me in legal research, because sometimes they point me in the direction of caselaw or legal considerations that hadn’t occurred to me. But the majority of the time, the vast majority, their summaries are incorrect, and their arguments are invalid.

LLMs are not capable of reasoning that requires non-obvious jumps of logic more than one small step removed from the examples they have seen in training. If you attempt to use them to reason about a legal situation, you will immediately see them tie themselves in knots because they are not capable of that kind of reasoning, on top of their inability to accurately understand and summarize case documents and statutes.


There's a simpler explanation: they are comparing LLM performance to that of regular humans, not perfection.

Where do you think LLMs learned this behavior from? Go spend time in the academic literature outside of computer science and you will find an endless sea of material with BS citations that don't substantiate the claim being made, entirely made up claims with no evidence, citations of retracted papers, nonsensical numbers etc. And that's when papers take months to write and have numerous coauthors, peer reviewers and editors involved (theoretically).

Now read some newspapers or magazines and it's the same except the citations are gone.

If an LLM can meet that same level of performance in a few seconds, it's objectively impressive unless you compare to a theoretical ideal.


LLMs are already great at contextualizing and explaining things. HN is so allergic to AI, it's incredible. And it's leaving you behind.


They are. I use LLMs. They need to be given context, which is easy for things that are already on the Internet for them to pull from. When people stop writing news articles that connect events to one another, LLMs will have nothing to pull into their context. They are not capable of connecting two random sources.

Edit: also, the primary point is that if everyone uses LLMs for reporting, the loss of revenue will cause the disappearance of the investigative journalism that it funds, which LLMs sure as fuck aren't going to do.


Is this article investigative? Summarizing the court case PDF is trivial for an LLM, and most will probably do a better job than the linked article. The main difference being you won't be bombarded with ads and other nonsense (at least for now). Hell, I wouldn't be surprised if the reporter had an LLM summarize the case before they wrote the article.
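The mechanical part of that is genuinely small. A rough sketch, assuming pypdf for text extraction and the OpenAI Python SDK ("ruling.pdf" and the model name are placeholders); whether the resulting summary is actually trustworthy is what the rest of the thread is arguing about:

    # Rough sketch: extract the ruling's text and ask a model for a summary.
    # Assumes pypdf and the OpenAI Python SDK; "ruling.pdf" and the model name
    # are placeholders.
    from pypdf import PdfReader
    from openai import OpenAI

    # extract_text() can return None on image-only pages, hence the "or ''"
    text = "\n".join(page.extract_text() or "" for page in PdfReader("ruling.pdf").pages)

    client = OpenAI()
    summary = client.chat.completions.create(
        model="gpt-4o",  # placeholder
        messages=[
            {"role": "system",
             "content": "Summarize this court ruling for a general reader."},
            {"role": "user", "content": text},
        ],
    ).choices[0].message.content

    print(summary)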

Content that can't be easily made by an LLM will still be worth something. But go to most news sites and their content is mostly summarization of someone else's content. LLMs may make that a hard sell.


I think it's a mix of shortsightedness and straight-up denial. A lot of people on here were the smart nerdy kid. They are good at programming or electronics or whatever. It became their identity, and they are fuckin scared that the one thing they can do well will be taken away, rather than putting the new tool in their toolbox.


The problem I may have with using an LLM for this is that I am not already familiar with the subject in detail and won't know when the thing has:

* Strayed from reality

* Strayed from the document and is freely admixing with other information from its training data without saying so. Done properly, this is a powerful tool for synthesis, and LLMs theoretically are great at it, but done improperly it just muddles things

* Has some kind of bias baked in - ironic mdash - "in summary, this ruling is an example of judicial overreach by activist judges against a tech company which should morally be allowed to do what they want". Not such a problem now, but I think we may see more of this once AI is firmly embedded into every information flow. Currently the AI company game is training people to trust the machine. Once they do, what a resource those people become!

Now, none of those points are unique to LLMs: inaccuracy, misunderstanding, wrong or confused synthesis and especially bias are all common in human journalism. Gell-Mann amnesia and institutional bias and all that.

Perhaps the problem is that I'm not sufficiently mistrustful of the status quo, even though I am already quite suspicious of journalistic analysis. Or maybe it's because AI, though my brain screams "don't trust it, check everything, find the source", remains in the toolbox even when I find problems, whereas for a journalist I'd roll my eyes, call them a hack and leave the website.

Not that it's directly relevant to the immediate utility of AI today, but once AI is everything, or almost everything, then my next worry is what happens when you functionally only have published primary material and AI output to train on. Even without model collapse, what happens when AI journobots inherently don't "pick up the phone", so to speak, to dig up details? For the first year, the media runs almost for free. For the second year, there's no higher level synthesis for the past year to lean on and it all regresses to summarising press releases. Again, there are already many human publications that just repackage PRs, but when that's all there is? This problem isn't limited to journalism, but it's a good example.


"Based on the court's memorandum opinion in the case of United States v. Google LLC, Google is required to adhere to a series of remedies aimed at curbing its monopolistic practices in the search and search advertising markets. These remedies address Google's distribution agreements, data sharing, and advertising practices.

Distribution Agreements

A central component of the remedies focuses on Google's distribution agreements to ensure they are not shutting out competitors:

No Exclusive Contracts: Google is barred from entering into or maintaining exclusive contracts for the distribution of Google Search, Chrome, Google Assistant, and the Gemini app.

No Tying Arrangements: Google cannot condition the licensing of the Play Store or any other Google application on the preloading or placement of its other products like Search or Chrome.

Revenue Sharing Conditions: The company is prohibited from conditioning revenue-sharing payments on the exclusive placement of its applications.

Partner Freedom: Distribution partners are now free to simultaneously distribute competing general search engines (GSEs), browsers, or generative AI products.

Contract Duration: Agreements with browser developers, OEMs, and wireless carriers for default placement of Google products are limited to a one-year term.

Data Sharing and Syndication

To address the competitive advantages Google gained through its exclusionary conduct, the court has ordered the following:

Search Data Access: Google must provide "Qualified Competitors" with access to certain search index and user-interaction data to help them improve their services. This does not, however, include advertising data.

Syndication Services: Google is required to offer search and search text ad syndication services to qualified competitors on ordinary commercial terms. This will enable smaller firms to provide high-quality search results and ads while they build out their own capabilities.

Advertising Transparency

To promote greater transparency in the search advertising market, the court has mandated that:

Public Disclosure: Google must publicly disclose significant changes to its ad auction processes. This is intended to prevent Google from secretly adjusting its ad auctions to increase prices.

What Google is NOT Required to Do

The court also specified several remedies it would not impose:

No Divestiture: Google is not required to sell off its Chrome browser or the Android operating system.

No Payment Ban: Google can continue to make payments to distribution partners for the preloading or placement of its products. The court reasoned that a ban could harm these partners and consumers.

No Choice Screens: The court will not force Google to present users with choice screens on its products or on Android devices, citing a desire to avoid dictating product design.

No Sharing of Granular Ad Data: Google is not required to share detailed, query-level advertising data with advertisers.

A "Technical Committee" will be established to assist in implementing and enforcing the final judgment, which will be in effect for six years."

Frankly, I don't think that's bad at all. This is from Gemini 2.5 Pro.




The problem is you started with a link. Explain the case the way the news did and see what comes out.


A link to the court case (not the news article) seems like a valid starting place when asking for background and a summary of the case.


Right, but AI is supposed to replace news, yet you're starting with more context already.

I'm concerned about bootstrapping.



