
People are paying hundreds of dollars a month for these tools, often out of their personal pocket. That's a pretty robust indicator that something interesting is going on.


One thing these models are extremely good at is reading large amounts of text quickly and summarizing important points. That capability alone may be enough to pay $20 a month for many people.


Why would anyone want to read less and not more? It'd be like reading movie spoilers so you didn't have to sit through 2 hours to find out what happened.


Why is the word grass 5 letters instead of 500? Because it's a short and efficient way to transfer information. If AI is able to improve information transfer, that's amazing.


This is why you make sure to compress all your jpegs at 15% quality, so that the information transfer is most efficient, eh?

When I read (when everyone reads), I'm learning new words, new expressions, seeing how other people (the writer in this case) thinks, etc. The point was never just the information. This is why everyone becomes a retard when they rely on the "AI"... we've all seen those horror stories and don't know whether to believe them or not, but we sort of suspect that they must be true if embellished. You know, the ones where the office drone doesn't know how to write a simple email, where the college kid turning in A-graded essays can't scribble out caveman grunts on the paper test. I will refrain from deliberately making myself less intelligent if I have any say in the matter. You're living your life wrong.


Because you could do something else during those 2 hours, and are interested in being able to talk about movies but not in watching them?


Not just summarizing, but also being able to answer follow-up questions about what is in the text.

And, like Wikipedia, they can be useful for finding your bearings in a subject that you know nothing about. Unlike Wikipedia, you can ask them free-form questions and have them review your understanding.


I keep hearing anecdotes, but the data, like a widely covered BBC study, suggest they only compress and shorten, and that outside of test settings they routinely fail to pick out only the most important content or topics from real-world material.


You don't have to take my word for it -- just give an LLM a text that you are well familiar with and ask it questions about it.
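
For example, something like this is enough to try it (a rough sketch using the openai Python package; the model name, file, and prompt are just placeholders, not a recommendation):

    # Sketch: load a text you already know well, then quiz the model on it.
    # Assumes the `openai` package is installed and OPENAI_API_KEY is set.
    from openai import OpenAI

    client = OpenAI()
    document = open("essay_i_know_well.txt").read()  # any text you know well

    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model works
        messages=[
            {"role": "system", "content": "Answer only from the provided text."},
            {"role": "user", "content": f"Text:\n{document}\n\nQuestion: What is the author's main argument?"},
        ],
    )
    print(reply.choices[0].message.content)

Then check the answers against what you know is actually in the text.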


Yup! I've done this and it sucks!


> and summarizing important points

Unfortunately, the LLM does not (and cannot) know which points are important and which are not.

If you just want a text summary based on statistical methods, then go ahead, LLMs do this cheaper and better than the previous generation of tools.

If you want actual "importance" then no.
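
To be concrete about what "statistical methods" means here, this is roughly the Luhn-style extractive approach the previous generation of tools used (a toy sketch, not any particular product): sentences are scored by how many frequent words they contain, and the top few are kept. "Importance" is nothing but word counts.

    # Toy Luhn-style extractive summarizer: word frequency in, "importance" out.
    import re
    from collections import Counter

    def extractive_summary(text: str, k: int = 3) -> str:
        sentences = re.split(r"(?<=[.!?])\s+", text.strip())
        freq = Counter(re.findall(r"[a-z']+", text.lower()))

        def score(sentence: str) -> float:
            tokens = re.findall(r"[a-z']+", sentence.lower())
            return sum(freq[t] for t in tokens) / (len(tokens) or 1)

        top = set(sorted(sentences, key=score, reverse=True)[:k])
        # Keep the chosen sentences in their original order.
        return " ".join(s for s in sentences if s in top)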


> That capability alone may be enough to pay $20 a month for many people.

Sure, but that's not why I and others now have ~$150/month subscriptions to some of these services.


A tool can feel productive and novel, without actually providing all of the benefits the user thinks it is.


I'm not disputing the value of what these tools can do, even though that is often inflated as well. What I'm arguing against is using language that anthropomorphizes them to make them appear far more capable than they really are. That's dishonest at best, and only benefits companies and their shareholders.


> anthropomorphizes them to make them appear far more

It seems like this argument is frequently brought up just because someone used words like "thinking" or "reasoning". While it's true that LLMs aren't really "reasoning" like a human, the terms are used not because the person actually believes the LLM is reasoning like a human, but because the concept of "some junk tokens to get better tokens later" has been implemented under that name. And even with that name, it doesn't mean everyone believes the models are doing human reasoning.

It's a bit like "isomorphic" programming frameworks. They're not talking about the mathematical structures that also bear the name "isomorphic"; rather, the name has been "stolen" to mean more things, because it was kind of similar in some way.

I'm not sure what the alternative is. Humans have been doing this thing of "Ah, this new concept X is kind of similar to concept Y, maybe we reuse the name to describe X for now" for a very long time, and if you understand the context when it's brought up, it seems relatively problem-free to me; most people seem to get it.

It benefits everyone in the ecosystem when terms have shared meaning, so discussions about "reasoning" don't have to use terms like "How an AI uses jumbled starting tokens within the <think> tags to get better tokens later", and can instead just say "How an AI uses reasoning" and people can focus on the actual meat instead.
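
For what it's worth, the convention is mundane in practice: some open reasoning models (DeepSeek-R1 is the commonly cited example, though tag names vary) emit their scratch tokens between <think> tags, and client code just splits them off before showing the answer. A rough sketch, not any particular vendor's API:

    # Split a model response into (scratch "reasoning" tokens, final answer),
    # assuming the <think>...</think> convention some open models use.
    import re

    def split_reasoning(raw: str) -> tuple[str, str]:
        m = re.search(r"<think>(.*?)</think>", raw, flags=re.DOTALL)
        if m is None:
            return "", raw.strip()
        return m.group(1).strip(), raw[m.end():].strip()

Whether you call the first half "reasoning" or "jumbled starting tokens", it's the same string either way.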



