For anything important I always ask LLMs for links and follow them. I think this will probably just create a strong incentive to cover important things and move away from clickbait.

It's probably a win for everyone in the long run, although it means news sites will have to change. That's going to be painful for them, but it's not like they're angels either.



I'm surprised the links work for you at all. 90+% of citations for non-trivial information (i.e. not in a textbook but definitely in the literature) I've gotten from LLMs have been very convincing hallucinations. The year and journal volume will match, the author will be someone who plausibly would have written on the topic, but the articles don't exist and never did. It's a tremendous waste of time and energy compared to old-fashioned library search tools.


And what happens when you follow them?

In my experience, the answers tend to be sourced from fringe little blogs that I would never trust in a Google search.

Google at least attempts to rank them by quality, while LLM web search seems to click on the closest match regardless of the (lack of) quality.


Huh, that's strange to hear. The HN I remember would have said the opposite (that the small web tends to be higher quality), and so would I.


One thing I did once with great success was asking ChatGPT something like "I'm trying to find information about X, but when I Google it I just get results about the app named after X. Can you suggest a better query?"

X was some technical thing I didn't know a lot about, so it gave me some more words to narrow down the query that I would not have known about myself. And that really helped me find the information I needed.

It also threw in some book tips for good measure.

So yeah, I can highly recommend this workflow.
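
If you want to make that trick repeatable, the same prompt pattern is easy to script. A minimal sketch, assuming the openai Python package (>= 1.0), an OPENAI_API_KEY in the environment, and a placeholder model name; the prompt wording is my own:

    # Ask the model for better search queries on a topic whose jargon you don't know.
    # Assumes: openai package >= 1.0, OPENAI_API_KEY set; "gpt-4o-mini" is a placeholder model.
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY from the environment

    def suggest_queries(topic: str, problem: str) -> str:
        prompt = (
            f"I'm trying to find information about {topic}, but when I Google it "
            f"I just get {problem}. Can you suggest a few better, more specific "
            f"search queries, using the terms an expert would use?"
        )
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    print(suggest_queries("X", "results about the app named after X"))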


> I think this will probably just create a strong incentive to cover important things and move away from clickbait.

But clickbait is how they make money...

That's like saying, "Oh, Apple will just have to move away from selling the iPhone and start selling hamburgers instead."

I mean, sure, but they're not going to like it, and it's going to come with a lot of lost revenue and profits.

I find myself regularly copying URLs, sending them to Gemini, and asking it to answer what I want to get out of the article.

I'm not wasting my time scrolling through a mile of website and 88,000 ads to find the answer to the headline.
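
For anyone who wants to script that instead of pasting into the chat app (where Gemini does the fetching itself), here's a rough sketch. It assumes the requests, beautifulsoup4, and google-generativeai packages, a GEMINI_API_KEY in the environment, and a placeholder model name:

    # Fetch an article, strip the HTML, and ask Gemini to answer the headline's question.
    import os
    import requests
    from bs4 import BeautifulSoup
    import google.generativeai as genai

    genai.configure(api_key=os.environ["GEMINI_API_KEY"])
    model = genai.GenerativeModel("gemini-1.5-flash")  # placeholder model name

    def answer_from_article(url: str, question: str) -> str:
        html = requests.get(url, timeout=30).text
        text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)
        prompt = (
            f"Here is an article:\n\n{text[:20000]}\n\n"
            f"Answer this question as directly as possible: {question}"
        )
        return model.generate_content(prompt).text

    print(answer_from_article("https://example.com/some-article",
                              "What is the answer to the headline?"))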


Those adverts and that clickbait will infect LLMs soon enough; they'll just be far harder to block.


Yes, unfortunately for those saying AIs will only get better, advertising is a major reason we should expect them to get worse.


Ironically, I wonder if it would inspire a slew of downstream services that use LLMs to clean advertising out of the mainstream LLM responses.


With the huge usage that LLM APIs are getting in all sorts of industries, they're not going away anytime soon, and they're cheap.

If consumer AI chatbots get enshittified, you can just grab an open-source bring-your-own-API-keys front-end and chat away for peanuts, without ads or anything else anti-user.

I use https://github.com/sigoden/aichat, but there are GUIs too.

Plus, anyone enterprising can just write a web front-end and sell it as "the ad-free AI chatbot, only $10/mo, usage limits apply".
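
To be concrete, the bring-your-own-keys setup is basically a thin loop over the provider's API. A minimal sketch, assuming the openai package and an OPENAI_API_KEY in the environment; the model name is a placeholder, and real front-ends like aichat add config, streaming, and multiple providers on top:

    # Minimal bring-your-own-key chat loop, roughly what the open-source front-ends wrap.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    history = []

    while True:
        user = input("you> ")
        history.append({"role": "user", "content": user})
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=history,
        ).choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        print(reply)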


But what if the ads are in the AI output itself, not the UI...



