I have only anecdotal data from non-technical friends and family.
I’m referring to average people who may not be average users because they’re barely using LLMs in the first place, if at all.
They have maybe tried ChatGPT a few times to generate some silly stories, and might come back to it once or twice a month for a question or two, but that's it.
We’re all colored by our bubbles, and that’s not a study, but it’s something.
For most people, AI is stuck at GPT-4 and other models with comparable performance. Anecdotally as well, many people I know who have tried it found it mildly useful, but they're running into what coders and other tech workers experienced two years or so ago: lots of hallucinations, lack of context, lack of knowledge, and so on. If you went back to those models, you would at best feel like it's an occasional code helper, or little more than autocomplete.
A lot of the recent reasoning-model improvements are in domains where RL, RLHF, and other techniques can be both applied and verified with data and training; coding and math in particular are "easy targets", either because of their determinism or because of the implementers' domain knowledge. Hence it has been quite disruptive to those industries (e.g. AI people know and write a lot of software). I've heard a lot of comments in my circles from people saying they don't want AI to have the data/context/etc., in order to protect their company/job/etc. (i.e. their economic moat/value). They look at coding and don't want that to be them: if coding is that hard and it can still get automated like that, imagine my job.