My experience is the opposite: laypeople are excessively pessimistic about LLM progress ("AI is so dumb. It tells you to put glue on pizza and eat rocks"), usually because of a remembered anecdote that is either years old or reflects worst-case performance (only egregiously bad AI mistakes make the news).
Frontier models are better than they were and "feel" fairly reliable, although all the AI problems of 2021-2022 still exist conceptually.