Thing is, we've been here before in a much more limited way; people _hated_ it when Microsoft's demonic paperclip did this in Office. "It looks like you're writing a letter". _Hated_ it.
It is unclear what the industry thinks has changed, that people will now welcome "It looks like you're [whatever]".
This forum (HN) attracts a certain population that wants to do things, to understand, to share relatively well-founded opinions, and to have a discussion.
But look around, look at the new hires in the other departments. And by new I mean young, in their 20s. A lot of them welcome this kind of thing; they evaluate by popularity and likes. The marketing behind the AI bubble knows this, and so it pushes for it. Making it popular is more important than making it useful, because there is a tipping point where it is popular enough that we capitulate.
Did they, though? Polling fairly consistently shows that people don’t _like_ this stuff, and there’s some evidence that the more familiar with it they become the less they like it. I think Microsoft et al were betting on people liking it (that was certainly their thinking with Clippy, too) but that doesn’t seem to be working out for them.
My sister is ripping her hair out dealing with the interns at her job being extremely tech illiterate with anything that's not app-ified. Many don't know what files are and need to be run through computer basics, because everything they have used has anything technical abstracted away. The post-iPhone generation just wants their hand held, and anything technical scares them. Microsoft Bob was just too far ahead of the curve.
Not only have the people changed, but it is the belief of the elites at the top that humanity is entering a new era of hackability. They want to use these AI systems to rewrite humanity into their vision of the future.
Yuval Noah Harari talking about how the new "gods" are the data-centers and how free will is dead in the age of AI.
https://youtu.be/QuL3wlodJC8
Well, for the purpose of this conversation the people at the top of the food chain believe free will exists. They also believe that they can eliminate it with AI and biomedical manipulation.
The goal with most of these AI features is not to solve a real problem users are having, it's to add a feature that uses AI. This will not change because it's not wrong of the individuals making the decision. The project manager gets to say he shipped a cutting-edge AI project. The developers all get to put experience working with very hireable technologies at a serious company on their resume. There will be no adverse impact to the bottom line, because the cost to develop the shitty AI feature is a drop in the bucket, and the cost to create a competing product that accomplishes the core thing users are using that product for but without feature bloat would be very high, and probably unsuccessful since "less feature bloat" has never been sufficient to break the static friction threshold for users to switch.
So it won't change, because there is no lesson to learn. No individual involved acted irrationally.
It's a design that's in companies' best interests. You can have a computer that's a "friend." One that you trust but that ultimately has a mind of its own. This contrasts with a computer that's merely a tool, one that serves you exclusively at your pleasure and has zero agency of its own.
Which approach gives companies more control over users? Which one allows companies to sell that access to the highest bidder?
Based on the experience of 20 years ago, though, users are _extremely_ turned off by it. There's little reason to think this has changed (if anything it is likely more pronounced because Clippy came in kinda without baggage, whereas LLMs have a lot of baggage and most of it ain't great).
> It's a design that's in companies' best interests.
I really don't think it is. Clippy was reputationally damaging to Microsoft and they had to get rid of it. There's little reason to think this will be different.
Modern Big Tech doesn't particularly care what users think. They know they have network effects on their side and that switching costs are high. So what if it's "reputationally damaging?" What are users going to do? They're just resources to be exploited. Microsoft, Google, and their ilk can treat users with contempt if it means more control and more shareholder value.
Third option: the computer is your enemy, which will follow any sufficiently clever adversary’s orders.
Thinking of a computer as a tool seems reasonable, but thinking of your computer as your friend is clownish (which, I think you agree with based on your last comment).
Slightly offtopic, but I have a friend with synesthesia who sees inanimate objects like people, and they call their computer "macbook friend" (since it's a MacBook Air).
Clippy (and his predecessors; he wasn't the first avatar for the feature) might not have been so bad, but marketing got hold of it and decided it didn't pop up often enough for them to really make a big thing of it, so it was tuned up to an irritating level.
> It is unclear what the industry thinks has changed
The demographics of computer (and other device) use have changed massively since the late 90s, and the suggestion engines are much more powerful.
I still want it all to take a long walk off a short pier, but a lot of people seem happy with it bothering them.
I remember when software would ask you on first start what your level of experience was. "Novice, Intermediate, Expert" and would tune the UI to respect that.
Sometimes different modes like that can be more hassle than they are worth, from the dev point of view. You can end up with many more paths to test in order to try to make sure your product is bug-free.
If the automation is much better at the task than I am, then I am happy to delegate the responsibility to it: it's a matter of accuracy. Clippy kind of sucked even when he was right about what I was trying to do. For many things, the LLMs are getting good enough to outperform me.
The customer base for computing has expanded probably 3 or 4 fold or more from those Windows XP days in the US. Maybe for the subset of the population that was word processing back then, it was annoying. But now we are looking at a different pie entirely, where that subset of annoyed power users is but a tiny sliver. There are people today who have no experience even with a desktop OS.
This wasn’t the dark ages; in highly developed countries the ‘computer on every desk’ thing had just about come true. I doubt there are that many more regular word processor users now than in the late Clippy era, at least in the developed world.
Thing is, Gmail's been doing this ~forever with quick replies to emails; now it's just doing longer replies instead of "that's great, thanks"-level replies.
But Clippy didn't write the letter for me. If I can be lazy and AI formats what I'm communicating in a way that is accessible to other people, then why should I care?