One thing these models are extremely good at is reading large amounts of text quickly and summarizing the important points. For many people, that capability alone may be worth $20 a month.
Why would anyone want to read less and not more? It'd be like reading movie spoilers so you don't have to sit through two hours to find out what happened.
Why is the word "grass" five letters instead of 500? Because it's a short, efficient way to transfer information. If AI can improve information transfer, that's amazing.
This is why you make sure to compress all your jpegs at 15% quality, so that information transfer is as efficient as possible, eh?
When I read (when anyone reads), I'm learning new words and new expressions, and seeing how another person (the writer, in this case) thinks. The point was never just the information.

This is why everyone's thinking atrophies when they lean on the "AI". We've all seen those horror stories and don't know whether to believe them, but we suspect they must be true, if embellished: the office drone who can't write a simple email, the college kid turning in A-graded essays who can't scribble out caveman grunts on the paper test. I will refrain from deliberately making myself less intelligent if I have any say in the matter. You're living your life wrong.
Not just summarizing, but also being able to answer follow-up questions about what is in the text.
And, like Wikipedia, they can be useful for finding your bearings in a subject you know nothing about. Unlike Wikipedia, you can ask them free-form questions and have them review your understanding.
I keep hearing anecdotes, but the data, like a widely covered BBC study, suggest otherwise: the models compress and shorten, but outside of controlled tests they routinely fail at the real-world job of selecting only the most important content or topics.
You don't have to take my word for it: just give an LLM a text you know well and ask it questions about it.
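If you want to script that experiment rather than paste into a chat window, here's a rough sketch. It assumes the OpenAI Python SDK with an API key in OPENAI_API_KEY; the model name, file path, and questions are placeholders, and any other provider's API would work the same way:

    # Sketch: quiz a model about a text you already know well, then compare
    # its answers against what the text actually says.
    # Assumes: `pip install openai`, OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()

    # A text you are intimately familiar with (placeholder path).
    with open("familiar_text.txt", "r", encoding="utf-8") as f:
        source_text = f.read()

    questions = [
        "Summarize the three most important points of this text.",
        "What does the author argue against, and what evidence is given?",
    ]

    for question in questions:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": "Answer only from the provided text."},
                {"role": "user", "content": f"{source_text}\n\n{question}"},
            ],
        )
        print(question)
        print(response.choices[0].message.content)
        print()

You'll quickly see whether it surfaces the points you'd call important or just produces a shorter, blander version of the text.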