They're boiling the frog pretty quickly, honestly. Token usage has clearly been an issue with Claude Code from the beginning. It just blows through tokens.
The only site that actively funnels recommendations at me is YouTube at this point. In most cases that's fine for my taste, because I use YouTube more as a learning platform for things like car mechanics, photography, etc. So it doesn't serve me anything toxic.
For the other social media platforms, my setup shields me from that pretty well.
And it will affect good engineers and turn them into worse engineers too
AI benefits rely on these good engineers having 5, 10, 20 years of experience pre-AI designing (and fully, thoroughly understanding) these systems. What's going to happen to that engineering skill after 15 years of AI use?
It ought to only get better as it gets honed at an even faster pace than before, utilizing techniques and algorithms that would have been out of reach due to outside constraints.
Okay, but the user is describing listening to papers, then having to read the papers because listening to them isn't efficient. So why bother listening in the first place if you're going to read it anyway?
No, reading verbatim from a technical paper is way too dense. You need a lot of filler words to slow it down and repetition to make it stick when read aloud.
Not the original reply, but I support the correction here. Regardless of how pedantic or nitpicky it seems, I remember getting confused about this a lot when learning digital signal processing, simply because it's really easy to upsample, or to look at an upsampled result and get confused by it.
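To make the confusion concrete, here's a toy sketch (my own example, not from the thread): a 2x linear-interpolation upsample doubles the sample count but adds no information, since decimating gets the original samples back exactly.

```python
def upsample_2x_linear(x):
    """Insert a linearly interpolated sample between each pair of samples."""
    out = []
    for a, b in zip(x, x[1:]):
        out.append(a)
        out.append((a + b) / 2)  # interpolated sample: carries no new information
    out.append(x[-1])
    return out

def decimate_2x(x):
    """Keep every other sample, starting with the first."""
    return x[::2]

original = [0.0, 1.0, 0.0, -1.0, 0.0]
upsampled = upsample_2x_linear(original)
# The upsampled signal "looks" higher-resolution, but the originals survive intact:
assert decimate_2x(upsampled) == original
```

That round trip is why an upsampled result can be misleading: it looks denser without actually containing anything the lower-rate signal didn't already have.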
"Helping to find wording that conveys your thoughts" could mean several things. It could mean you fire off one-shot reword prompts and that helps you find wording. Or it could mean you're taking its output more or less wholesale. Or you're going back and forth, where the LLM is suggesting and you're suggesting too. It's incredibly vague what portion of the "helping" the LLM is doing!
Whereas "search" implies (to me) a kind of direct and analytical process of listing and throwing out brainstormed suggestions, like you would with a search engine.
When I read the human version I actually get a sense of what that process looks like, and the LLM response definitely clouds or changes it by focusing on the result instead.